1 /* Integrated Register Allocator (IRA) entry point.
2 Copyright (C) 2006-2014 Free Software Foundation, Inc.
3 Contributed by Vladimir Makarov <vmakarov@redhat.com>.
5 This file is part of GCC.
7 GCC is free software; you can redistribute it and/or modify it under
8 the terms of the GNU General Public License as published by the Free
9 Software Foundation; either version 3, or (at your option) any later
12 GCC is distributed in the hope that it will be useful, but WITHOUT ANY
13 WARRANTY; without even the implied warranty of MERCHANTABILITY or
14 FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
17 You should have received a copy of the GNU General Public License
18 along with GCC; see the file COPYING3. If not see
19 <http://www.gnu.org/licenses/>. */
21 /* The integrated register allocator (IRA) is a
22 regional register allocator performing graph coloring on a top-down
23 traversal of nested regions. Graph coloring in a region is based
24 on the Chaitin-Briggs algorithm. It is called integrated because
25 register coalescing, register live range splitting, and choosing a
26 better hard register are done on-the-fly during coloring. Register
27 coalescing and choosing a cheaper hard register are done by hard
28 register preferencing during hard register assignment. The live
29 range splitting is a byproduct of the regional register allocation.
31 Major IRA notions are:
33 o *Region* is a part of CFG where graph coloring based on
34 the Chaitin-Briggs algorithm is done. IRA can work on any set of
35 nested CFG regions forming a tree. Currently the regions are
36 the entire function for the root region and natural loops for
37 the other regions. Therefore the data structure representing a
38 region is called loop_tree_node.
40 o *Allocno class* is a register class used for allocation of a
41 given allocno. It means that only a hard register of the given
42 register class can be assigned to the given allocno. In reality,
43 an even smaller subset of (*profitable*) hard registers can be
44 assigned. In rare cases, the subset can be even smaller
45 because our modification of the Chaitin-Briggs algorithm requires
46 that the sets of hard registers that can be assigned to allocnos form a
47 forest, i.e. the sets can be ordered in a way where any
48 previous set does not intersect a given set or is a superset of it.
51 o *Pressure class* is a register class belonging to a set of
52 register classes containing all of the hard-registers available
53 for register allocation. The set of all pressure classes for a
54 target is defined in the corresponding machine-description file
55 according to some criteria. Register pressure is calculated only
56 for pressure classes and it affects some IRA decisions such as
57 forming allocation regions.
59 o *Allocno* represents the live range of a pseudo-register in a
60 region. Besides the obvious attributes like the corresponding
61 pseudo-register number, allocno class, conflicting allocnos and
62 conflicting hard-registers, there are a few allocno attributes
63 which are important for understanding the allocation algorithm:
65 - *Live ranges*. This is a list of ranges of *program points*
66 where the allocno lives. Program points represent places
67 where a pseudo can be born or become dead (there are
68 approximately two times more program points than the insns)
69 and they are represented by integers starting with 0. The
70 live ranges are used to find conflicts between allocnos.
71 They also play a very important role in the transformation of
72 the IRA internal representation of several regions into a
73 one-region representation. The latter is used during the reload
74 pass because each allocno represents all of the
75 corresponding pseudo-registers.
77 - *Hard-register costs*. This is a vector of size equal to the
78 number of available hard-registers of the allocno class. The
79 cost of a callee-clobbered hard-register for an allocno is
80 increased by the cost of save/restore code around the calls
81 through the given allocno's life. If the allocno is a move
82 instruction operand and another operand is a hard-register of
83 the allocno class, the cost of the hard-register is decreased
86 When an allocno is assigned, the hard-register with minimal
87 full cost is used. Initially, a hard-register's full cost is
88 the corresponding value from the hard-register's cost vector.
89 If the allocno is connected by a *copy* (see below) to
90 another allocno which has just received a hard-register, the
91 cost of the hard-register is decreased. Before choosing a
92 hard-register for an allocno, the allocno's current costs of
93 the hard-registers are modified by the conflict hard-register
94 costs of all of the conflicting allocnos which are not assigned yet.
97 - *Conflict hard-register costs*. This is a vector of the same
98 size as the hard-register costs vector. To permit an
99 unassigned allocno to get a better hard-register, IRA uses
100 this vector to calculate the final full cost of the
101 available hard-registers. Conflict hard-register costs of an
102 unassigned allocno are also changed with a change of the
103 hard-register cost of the allocno when a copy involving the
104 allocno is processed as described above. This is done to
105 show other unassigned allocnos that a given allocno prefers
106 some hard-registers in order to remove the move instruction
107 corresponding to the copy.
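
       To make the cost mechanics above concrete, here is a rough
       sketch of the "minimal full cost" choice; it is not the actual
       IRA code, and class_regs, own_costs, conflict_costs and
       class_regs_num are hypothetical names standing for the vectors
       described above:

         int i, best_hard_regno = -1, best_cost = INT_MAX;
         for (i = 0; i < class_regs_num; i++)
           {
             int full_cost = own_costs[i] + conflict_costs[i];
             if (full_cost < best_cost)
               {
                 best_cost = full_cost;
                 best_hard_regno = class_regs[i];
               }
           }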
109 o *Cap*. If a pseudo-register does not live in a region but
110 lives in a nested region, IRA creates a special allocno called
111 a cap in the outer region. A region cap is also created for a cap.
114 o *Copy*. Allocnos can be connected by copies. Copies are used
115 to modify hard-register costs for allocnos during coloring.
116 Such modifications reflect a preference to use the same
117 hard-register for the allocnos connected by copies. Usually
118 copies are created for move insns (in this case it results in
119 register coalescing). But IRA also creates copies for operands
120 of an insn which should be assigned to the same hard-register
121 due to constraints in the machine description (it usually
122 results in removing a move generated in reload to satisfy
123 the constraints) and copies referring to the allocno which is
124 the output operand of an instruction and the allocno which is
125 an input operand dying in the instruction (creation of such
126 copies results in less register shuffling). IRA *does not*
127 create copies between the same register allocnos from different
128 regions because we use another technique for propagating
129 hard-register preference on the borders of regions.
131 Allocnos (including caps) for the upper region in the region tree
132 *accumulate* information important for coloring from allocnos with
133 the same pseudo-register from nested regions. This includes
134 hard-register and memory costs, conflicts with hard-registers,
135 allocno conflicts, allocno copies and more. *Thus, attributes for
136 allocnos in a region have the same values as if the region had no
137 subregions*. It means that attributes for allocnos in the
138 outermost region corresponding to the function have the same values
139 as though the allocation used only one region which is the entire
140 function. It also means that we can look at IRA's work as if
141 IRA first did allocation for the whole function, then improved the
142 allocation for loops, then their subloops, and so on.
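
       A minimal sketch of this bottom-up accumulation, assuming
       hypothetical allocno fields (memory_cost, hard_reg_costs,
       conflict_hard_regs) and hypothetical helpers parent_allocno and
       next_allocno (only IOR_HARD_REG_SET is a real IRA macro):

         for (a = first_subregion_allocno; a != NULL; a = next_allocno (a))
           {
             p = parent_allocno (a);
             p->memory_cost += a->memory_cost;
             for (i = 0; i < class_regs_num; i++)
               p->hard_reg_costs[i] += a->hard_reg_costs[i];
             IOR_HARD_REG_SET (p->conflict_hard_regs, a->conflict_hard_regs);
           }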
144 IRA major passes are:
146 o Building IRA internal representation, which consists of the following subpasses:
149 * First, IRA builds regions and creates allocnos (file
150 ira-build.c) and initializes most of their attributes.
152 * Then IRA finds an allocno class for each allocno and
153 calculates its initial (non-accumulated) cost of memory and
154 each hard-register of its allocno class (file ira-costs.c).
156 * IRA creates live ranges of each allocno, calculates register
157 pressure for each pressure class in each region, sets up
158 conflict hard registers for each allocno and info about calls
159 the allocno lives through (file ira-lives.c).
161 * IRA removes low register pressure loops from the regions
162 mostly to speed IRA up (file ira-build.c).
164 * IRA propagates accumulated allocno info from lower region
165 allocnos to corresponding upper region allocnos (file ira-build.c).
168 * IRA creates all caps (file ira-build.c).
170 * Having live-ranges of allocnos and their classes, IRA creates
171 conflicting allocnos for each allocno. Conflicting allocnos
172 are stored as a bit vector or an array of pointers to the
173 conflicting allocnos, whichever is more profitable (file
174 ira-conflicts.c). At this point IRA creates allocno copies.
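
       The conflict test itself reduces to checking whether the live
       ranges of two allocnos share a program point. A self-contained
       sketch follows (live_range_t with start, finish and next fields
       loosely follows IRA's representation; the function name is
       hypothetical and the real, more efficient check lives in IRA's
       internal representation code):

         static bool
         ranges_intersect_p (live_range_t r1, live_range_t r2)
         {
           live_range_t p, q;

           for (p = r1; p != NULL; p = p->next)
             for (q = r2; q != NULL; q = q->next)
               if (p->start <= q->finish && q->start <= p->finish)
                 return true;
           return false;
         }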
176 o Coloring. Now IRA has all the necessary info to start the graph coloring
177 process. It is done in each region on a top-down traversal of the
178 region tree (file ira-color.c). There are the following subpasses:
180 * Finding profitable hard registers of corresponding allocno
181 class for each allocno. For example, only callee-saved hard
182 registers are frequently profitable for allocnos living
183 through calls. If the profitable hard register set of an
184 allocno does not form a tree based on the subset relation, we use
185 some approximation to form the tree. This approximation is
186 used to figure out trivial colorability of allocnos. The
187 approximation is a pretty rare case.
189 * Putting allocnos onto the coloring stack. IRA uses Briggs
190 optimistic coloring which is a major improvement over
191 Chaitin's coloring. Therefore IRA does not spill allocnos at
192 this point. There is some freedom in the order of putting
193 allocnos on the stack which can affect the final result of
194 the allocation. IRA uses some heuristics to improve the
195 order. The major one is to form *threads* from colorable
196 allocnos and push them on the stack by threads. A thread is a
197 set of non-conflicting colorable allocnos connected by
198 copies. The thread contains allocnos from the colorable
199 bucket or colorable allocnos already pushed onto the coloring
200 stack. Pushing thread allocnos one after another onto the
201 stack increases chances of removing copies when the allocnos
202 get the same hard reg.
204 We also use a modification of the Chaitin-Briggs algorithm which
205 works for intersected register classes of allocnos. To
206 figure out trivial colorability of allocnos, the
207 above-mentioned tree of hard register sets is used. To get an idea of how
208 the algorithm works in an i386 example, let us consider an
209 allocno to which any general hard register can be assigned.
210 If the allocno conflicts with eight allocnos to which only the
211 EAX register can be assigned, the given allocno is still
212 trivially colorable because all conflicting allocnos might be
213 assigned only to EAX and all other general hard registers are still free (a small sketch of such a check follows this item).
216 To get an idea of the used trivial colorability criterion, it
217 is also useful to read the article "Graph-Coloring Register
218 Allocation for Irregular Architectures" by Michael D. Smith
219 and Glenn Holloway. The major difference between the article's
220 approach and the approach used in IRA is that Smith's approach
221 takes register classes only from the machine description while IRA
222 calculates register classes from the intermediate code too
223 (e.g. an explicit usage of hard registers in RTL code for
224 parameter passing can result in creation of additional
225 register classes which contain or exclude the hard
226 registers). That makes the IRA approach useful for improving
227 coloring even for architectures with regular register files
228 and in fact some benchmarking shows the improvement for
229 regular class architectures is even bigger than for irregular
230 ones. Another difference is that Smith's approach chooses
231 the intersection of the classes of all insn operands in which a given
232 pseudo occurs. IRA can use bigger classes if it is still
233 more profitable than memory usage.
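
       Ignoring multi-register pseudos and the forest machinery, a
       sufficient condition capturing the EAX example above can be
       sketched with the real HARD_REG_SET macros used in this file
       (the function name is hypothetical; the actual criterion in
       ira-color.c is more precise):

         static bool
         trivially_colorable_p (HARD_REG_SET own_set,
                                HARD_REG_SET conflict_union)
         {
           HARD_REG_SET blocked;

           COPY_HARD_REG_SET (blocked, conflict_union);
           AND_HARD_REG_SET (blocked, own_set);
           return hard_reg_set_size (blocked) < hard_reg_set_size (own_set);
         }

       Here conflict_union stands for the union of the hard register
       sets of all conflicting allocnos: even if every conflicting
       allocno gets assigned, at least one register of own_set stays
       free.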
235 * Popping the allocnos from the stack and assigning them hard
236 registers. If IRA cannot assign a hard register to an
237 allocno and the allocno is coalesced, IRA undoes the
238 coalescing and puts the uncoalesced allocnos onto the stack in
239 the hope that some such allocnos will get a hard register
240 separately. If IRA fails to assign a hard register or memory
241 is more profitable for it, IRA spills the allocno. IRA
242 assigns the allocno the hard-register with minimal full
243 allocation cost, which reflects the cost of using the
244 hard-register for the allocno and the cost of using the
245 hard-register for allocnos conflicting with the given allocno.
247 * Chaitin-Briggs coloring assigns as many pseudos as possible
248 to hard registers. After coloring we try to improve the
249 allocation from a cost point of view. We improve the
250 allocation by spilling some allocnos and assigning the freed
251 hard registers to other allocnos if it decreases the overall allocation cost.
254 * After allocno assignment in a region, IRA modifies the hard
255 register and memory costs for the corresponding allocnos in
256 the subregions to reflect the cost of possible loads, stores,
257 or moves on the border of the region and its subregions.
258 When the default regional allocation algorithm is used
259 (-fira-algorithm=mixed), IRA just propagates the assignment
260 for allocnos if the register pressure in the region for the
261 corresponding pressure class is less than the number of available
262 hard registers for the given pressure class.
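
       The propagation condition can be sketched as follows; the
       reg_pressure field and the subloop variables are hypothetical
       shorthands, while ira_class_hard_regs_num and
       ALLOCNO_HARD_REGNO are real IRA names:

         if (subloop_node->reg_pressure[pclass]
             < ira_class_hard_regs_num[pclass])
           ALLOCNO_HARD_REGNO (subloop_allocno) = ALLOCNO_HARD_REGNO (allocno);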
264 o Spill/restore code moving. When IRA performs an allocation
265 by traversing regions in top-down order, it does not know what
266 happens below in the region tree. Therefore, sometimes IRA
267 misses opportunities to perform a better allocation. A simple
268 optimization tries to improve allocation in a region having
269 subregions and contained in another region. If the
270 corresponding allocnos in the subregion are spilled, it spills
271 the region allocno if it is profitable. The optimization
272 implements a simple iterative algorithm performing profitable
273 transformations while they are still possible. It is fast in
274 practice, so there is no real need for a better time complexity algorithm.
277 o Code change. After coloring, two allocnos representing the
278 same pseudo-register outside and inside a region respectively
279 may be assigned to different locations (hard-registers or
280 memory). In this case IRA creates and uses a new
281 pseudo-register inside the region and adds code to move allocno
282 values on the region's borders. This is done during top-down
283 traversal of the regions (file ira-emit.c). In some
284 complicated cases IRA can create a new allocno to move allocno
285 values (e.g. when a swap of values stored in two hard-registers
286 is needed). At this stage, the new allocno is marked as
287 spilled. IRA still creates the pseudo-register and the moves
288 on the region borders even when both allocnos were assigned to
289 the same hard-register. If the reload pass spills a
290 pseudo-register for some reason, the effect will be smaller
291 because another allocno will still be in the hard-register. In
292 most cases, this is better than spilling both allocnos. If
293 reload does not change the allocation for the two
294 pseudo-registers, the trivial move will be removed by
295 post-reload optimizations. IRA does not generate moves for
296 allocnos assigned to the same hard register when the default
297 regional allocation algorithm is used and the register pressure
298 in the region for the corresponding pressure class is less than
299 the number of available hard registers for the given pressure class.
300 IRA also does some optimizations to remove redundant stores and
301 to reduce code duplication on the region borders.
303 o Flattening internal representation. After changing code, IRA
304 transforms its internal representation for several regions into
305 one region representation (file ira-build.c). This process is
306 called IR flattening. Such process is more complicated than IR
307 rebuilding would be, but is much faster.
309 o After IR flattening, IRA tries to assign hard registers to all
310 spilled allocnos. This is implemented by a simple and fast
311 priority coloring algorithm (see function
312 ira_reassign_conflict_allocnos in ira-color.c). Here new allocnos
313 created during the code change pass can be assigned to hard registers.
316 o At the end IRA calls the reload pass. The reload pass
317 communicates with IRA through several functions in file
318 ira-color.c to improve its decisions in
320 * sharing stack slots for the spilled pseudos based on IRA info
321 about pseudo-register conflicts.
323 * reassigning hard-registers to all spilled pseudos at the end
324 of each reload iteration.
326 * choosing a better hard-register to spill based on IRA info
327 about pseudo-register live ranges and the register pressure
328 in places where the pseudo-register lives.
330 IRA uses a lot of data representing the target processors. These
331 data are initialized in file ira.c.
333 If the function has no loops (or the loops are ignored when
334 -fira-algorithm=CB is used), we have classic Chaitin-Briggs
335 coloring (only instead of a separate coalescing pass, we use hard
336 register preferencing). In such a case, IRA works much faster
337 because many things are skipped (like IR flattening, the
338 spill/restore optimization, and the code change).
340 Literature worth reading for a better understanding of the code:
342 o Preston Briggs, Keith D. Cooper, Linda Torczon. Improvements to
343 Graph Coloring Register Allocation.
345 o David Callahan, Brian Koblenz. Register allocation via
346 hierarchical graph coloring.
348 o Keith Cooper, Anshuman Dasgupta, Jason Eckhardt. Revisiting Graph
349 Coloring Register Allocation: A Study of the Chaitin-Briggs and
350 Callahan-Koblenz Algorithms.
352 o Guei-Yuan Lueh, Thomas Gross, and Ali-Reza Adl-Tabatabai. Global
353 Register Allocation Based on Graph Fusion.
355 o Michael D. Smith and Glenn Holloway. Graph-Coloring Register
356 Allocation for Irregular Architectures.
358 o Vladimir Makarov. The Integrated Register Allocator for GCC.
360 o Vladimir Makarov. The top-down register allocator for irregular
361 register file architectures.
368 #include "coretypes.h"
378 #include "hard-reg-set.h"
379 #include "basic-block.h"
384 #include "tree-pass.h"
388 #include "diagnostic-core.h"
390 #include "hash-set.h"
392 #include "machmode.h"
394 #include "function.h"
400 #include "rtl-iter.h"
401 #include "shrink-wrap.h"
403 struct target_ira default_target_ira;
404 struct target_ira_int default_target_ira_int;
405 #if SWITCHABLE_TARGET
406 struct target_ira *this_target_ira = &default_target_ira;
407 struct target_ira_int *this_target_ira_int = &default_target_ira_int;
410 /* A modified value of flag `-fira-verbose' used internally.  */
411 int internal_flag_ira_verbose;
413 /* Dump file of the allocator if it is not NULL. */
416 /* The number of elements in the following array. */
417 int ira_spilled_reg_stack_slots_num;
419 /* The following array contains info about spilled pseudo-registers
420 stack slots used in current function so far. */
421 struct ira_spilled_reg_stack_slot *ira_spilled_reg_stack_slots;
423 /* Correspondingly overall cost of the allocation, overall cost before
424 reload, cost of the allocnos assigned to hard-registers, cost of
425 the allocnos assigned to memory, cost of loads, stores and register
426 move insns generated for pseudo-register live range splitting (see
428 int ira_overall_cost, overall_cost_before;
429 int ira_reg_cost, ira_mem_cost;
430 int ira_load_cost, ira_store_cost, ira_shuffle_cost;
431 int ira_move_loops_num, ira_additional_jumps_num;
433 /* All registers that can be eliminated. */
435 HARD_REG_SET eliminable_regset;
437 /* Value of max_reg_num () before IRA work start. This value helps
438 us to recognize a situation when new pseudos were created during
440 static int max_regno_before_ira;
442 /* Temporary hard reg set used for a different calculation. */
443 static HARD_REG_SET temp_hard_regset;
445 #define last_mode_for_init_move_cost \
446 (this_target_ira_int->x_last_mode_for_init_move_cost)
449 /* The function sets up the map IRA_REG_MODE_HARD_REGSET. */
451 setup_reg_mode_hard_regset (void)
453   int i, m, hard_regno;
455   for (m = 0; m < NUM_MACHINE_MODES; m++)
456     for (hard_regno = 0; hard_regno < FIRST_PSEUDO_REGISTER; hard_regno++)
458         CLEAR_HARD_REG_SET (ira_reg_mode_hard_regset[hard_regno][m]);
459         for (i = hard_regno_nregs[hard_regno][m] - 1; i >= 0; i--)
460           if (hard_regno + i < FIRST_PSEUDO_REGISTER)
461             SET_HARD_REG_BIT (ira_reg_mode_hard_regset[hard_regno][m],
467 #define no_unit_alloc_regs \
468 (this_target_ira_int->x_no_unit_alloc_regs)
470 /* The function sets up the three arrays declared above. */
472 setup_class_hard_regs (void)
474   int cl, i, hard_regno, n;
475   HARD_REG_SET processed_hard_reg_set;
477   ira_assert (SHRT_MAX >= FIRST_PSEUDO_REGISTER);
478   for (cl = (int) N_REG_CLASSES - 1; cl >= 0; cl--)
480       COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl]);
481       AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
482       CLEAR_HARD_REG_SET (processed_hard_reg_set);
483       for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
485           ira_non_ordered_class_hard_regs[cl][i] = -1;
486           ira_class_hard_reg_index[cl][i] = -1;
488       for (n = 0, i = 0; i < FIRST_PSEUDO_REGISTER; i++)
490 #ifdef REG_ALLOC_ORDER
491           hard_regno = reg_alloc_order[i];
495           if (TEST_HARD_REG_BIT (processed_hard_reg_set, hard_regno))
497           SET_HARD_REG_BIT (processed_hard_reg_set, hard_regno);
498           if (! TEST_HARD_REG_BIT (temp_hard_regset, hard_regno))
499             ira_class_hard_reg_index[cl][hard_regno] = -1;
502               ira_class_hard_reg_index[cl][hard_regno] = n;
503               ira_class_hard_regs[cl][n++] = hard_regno;
506       ira_class_hard_regs_num[cl] = n;
507       for (n = 0, i = 0; i < FIRST_PSEUDO_REGISTER; i++)
508         if (TEST_HARD_REG_BIT (temp_hard_regset, i))
509           ira_non_ordered_class_hard_regs[cl][n++] = i;
510       ira_assert (ira_class_hard_regs_num[cl] == n);
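/* Illustrative only, not part of IRA: the three arrays set up above
   are typically consumed as sketched below.  ira_class_hard_regs[cl]
   lists the allocatable hard registers of class CL in allocation
   order, ira_class_hard_regs_num[cl] is their number, and
   ira_class_hard_reg_index[cl][regno] maps a hard register back to
   its position in that list (or -1 if it is not allocatable in CL):

     for (i = 0; i < ira_class_hard_regs_num[cl]; i++)
       {
         int hard_regno = ira_class_hard_regs[cl][i];

         gcc_assert (ira_class_hard_reg_index[cl][hard_regno] == i);
       }
*/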
514 /* Set up global variables defining info about hard registers for the
515 allocation. These depend on USE_HARD_FRAME_P whose TRUE value means
516 that we can use the hard frame pointer for the allocation. */
518 setup_alloc_regs (bool use_hard_frame_p)
520 #ifdef ADJUST_REG_ALLOC_ORDER
521   ADJUST_REG_ALLOC_ORDER;
523   COPY_HARD_REG_SET (no_unit_alloc_regs, fixed_reg_set);
524   if (! use_hard_frame_p)
525     SET_HARD_REG_BIT (no_unit_alloc_regs, HARD_FRAME_POINTER_REGNUM);
526   setup_class_hard_regs ();
531 #define alloc_reg_class_subclasses \
532 (this_target_ira_int->x_alloc_reg_class_subclasses)
534 /* Initialize the table of subclasses of each reg class. */
536 setup_reg_subclasses (void)
539   HARD_REG_SET temp_hard_regset2;
541   for (i = 0; i < N_REG_CLASSES; i++)
542     for (j = 0; j < N_REG_CLASSES; j++)
543       alloc_reg_class_subclasses[i][j] = LIM_REG_CLASSES;
545   for (i = 0; i < N_REG_CLASSES; i++)
547       if (i == (int) NO_REGS)
550       COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[i]);
551       AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
552       if (hard_reg_set_empty_p (temp_hard_regset))
554       for (j = 0; j < N_REG_CLASSES; j++)
559           COPY_HARD_REG_SET (temp_hard_regset2, reg_class_contents[j]);
560           AND_COMPL_HARD_REG_SET (temp_hard_regset2, no_unit_alloc_regs);
561           if (! hard_reg_set_subset_p (temp_hard_regset,
564           p = &alloc_reg_class_subclasses[j][0];
565           while (*p != LIM_REG_CLASSES) p++;
566           *p = (enum reg_class) i;
573 /* Set up IRA_MEMORY_MOVE_COST and IRA_MAX_MEMORY_MOVE_COST. */
575 setup_class_subset_and_memory_move_costs (void)
577   int cl, cl2, mode, cost;
578   HARD_REG_SET temp_hard_regset2;
580   for (mode = 0; mode < MAX_MACHINE_MODE; mode++)
581     ira_memory_move_cost[mode][NO_REGS][0]
582       = ira_memory_move_cost[mode][NO_REGS][1] = SHRT_MAX;
583   for (cl = (int) N_REG_CLASSES - 1; cl >= 0; cl--)
585       if (cl != (int) NO_REGS)
586         for (mode = 0; mode < MAX_MACHINE_MODE; mode++)
588             ira_max_memory_move_cost[mode][cl][0]
589               = ira_memory_move_cost[mode][cl][0]
590               = memory_move_cost ((enum machine_mode) mode,
591                                   (reg_class_t) cl, false);
592             ira_max_memory_move_cost[mode][cl][1]
593               = ira_memory_move_cost[mode][cl][1]
594               = memory_move_cost ((enum machine_mode) mode,
595                                   (reg_class_t) cl, true);
596             /* Costs for NO_REGS are used in cost calculation on the
597                1st pass when the preferred register classes are not
598                known yet.  In this case we take the best scenario.  */
599             if (ira_memory_move_cost[mode][NO_REGS][0]
600                 > ira_memory_move_cost[mode][cl][0])
601               ira_max_memory_move_cost[mode][NO_REGS][0]
602                 = ira_memory_move_cost[mode][NO_REGS][0]
603                 = ira_memory_move_cost[mode][cl][0];
604             if (ira_memory_move_cost[mode][NO_REGS][1]
605                 > ira_memory_move_cost[mode][cl][1])
606               ira_max_memory_move_cost[mode][NO_REGS][1]
607                 = ira_memory_move_cost[mode][NO_REGS][1]
608                 = ira_memory_move_cost[mode][cl][1];
611   for (cl = (int) N_REG_CLASSES - 1; cl >= 0; cl--)
612     for (cl2 = (int) N_REG_CLASSES - 1; cl2 >= 0; cl2--)
614         COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl]);
615         AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
616         COPY_HARD_REG_SET (temp_hard_regset2, reg_class_contents[cl2]);
617         AND_COMPL_HARD_REG_SET (temp_hard_regset2, no_unit_alloc_regs);
618         ira_class_subset_p[cl][cl2]
619           = hard_reg_set_subset_p (temp_hard_regset, temp_hard_regset2);
620         if (! hard_reg_set_empty_p (temp_hard_regset2)
621             && hard_reg_set_subset_p (reg_class_contents[cl2],
622                                       reg_class_contents[cl]))
623           for (mode = 0; mode < MAX_MACHINE_MODE; mode++)
625               cost = ira_memory_move_cost[mode][cl2][0];
626               if (cost > ira_max_memory_move_cost[mode][cl][0])
627                 ira_max_memory_move_cost[mode][cl][0] = cost;
628               cost = ira_memory_move_cost[mode][cl2][1];
629               if (cost > ira_max_memory_move_cost[mode][cl][1])
630                 ira_max_memory_move_cost[mode][cl][1] = cost;
633   for (cl = (int) N_REG_CLASSES - 1; cl >= 0; cl--)
634     for (mode = 0; mode < MAX_MACHINE_MODE; mode++)
636         ira_memory_move_cost[mode][cl][0]
637           = ira_max_memory_move_cost[mode][cl][0];
638         ira_memory_move_cost[mode][cl][1]
639           = ira_max_memory_move_cost[mode][cl][1];
641   setup_reg_subclasses ();
646 /* Define the following macro if allocation through malloc if
648 #define IRA_NO_OBSTACK
650 #ifndef IRA_NO_OBSTACK
651 /* Obstack used for storing all dynamic data (except bitmaps) of the
653 static struct obstack ira_obstack;
656 /* Obstack used for storing all bitmaps of the IRA. */
657 static struct bitmap_obstack ira_bitmap_obstack;
659 /* Allocate memory of size LEN for IRA data. */
661 ira_allocate (size_t len)
665 #ifndef IRA_NO_OBSTACK
666   res = obstack_alloc (&ira_obstack, len);
673 /* Free memory ADDR allocated for IRA data. */
675 ira_free (void *addr ATTRIBUTE_UNUSED)
677 #ifndef IRA_NO_OBSTACK
685 /* Allocate and return a bitmap for IRA. */
687 ira_allocate_bitmap (void)
689   return BITMAP_ALLOC (&ira_bitmap_obstack);
692 /* Free bitmap B allocated for IRA. */
694 ira_free_bitmap (bitmap b ATTRIBUTE_UNUSED)
701 /* Output information about allocation of all allocnos (except for
702 caps) into file F. */
704 ira_print_disposition (FILE *f)
710   fprintf (f, "Disposition:");
711   max_regno = max_reg_num ();
712   for (n = 0, i = FIRST_PSEUDO_REGISTER; i < max_regno; i++)
713     for (a = ira_regno_allocno_map[i];
715          a = ALLOCNO_NEXT_REGNO_ALLOCNO (a))
720         fprintf (f, " %4d:r%-4d", ALLOCNO_NUM (a), ALLOCNO_REGNO (a));
721         if ((bb = ALLOCNO_LOOP_TREE_NODE (a)->bb) != NULL)
722           fprintf (f, "b%-3d", bb->index);
724           fprintf (f, "l%-3d", ALLOCNO_LOOP_TREE_NODE (a)->loop_num);
725         if (ALLOCNO_HARD_REGNO (a) >= 0)
726           fprintf (f, " %3d", ALLOCNO_HARD_REGNO (a));
733 /* Outputs information about allocation of all allocnos into
736 ira_debug_disposition (void)
738   ira_print_disposition (stderr);
743 /* Set up ira_stack_reg_pressure_class which is the biggest pressure
744 register class containing stack registers or NO_REGS if there are
745 no stack registers. To find this class, we iterate through all
746 register pressure classes and choose the first register pressure
747 class containing all the stack registers and having the biggest
750 setup_stack_reg_pressure_class (void)
752 ira_stack_reg_pressure_class
= NO_REGS
;
757 HARD_REG_SET temp_hard_regset2
;
759 CLEAR_HARD_REG_SET (temp_hard_regset
);
760 for (i
= FIRST_STACK_REG
; i
<= LAST_STACK_REG
; i
++)
761 SET_HARD_REG_BIT (temp_hard_regset
, i
);
763 for (i
= 0; i
< ira_pressure_classes_num
; i
++)
765 cl
= ira_pressure_classes
[i
];
766 COPY_HARD_REG_SET (temp_hard_regset2
, temp_hard_regset
);
767 AND_HARD_REG_SET (temp_hard_regset2
, reg_class_contents
[cl
]);
768 size
= hard_reg_set_size (temp_hard_regset2
);
772 ira_stack_reg_pressure_class
= cl
;
779 /* Find pressure classes which are register classes for which we
780 calculate register pressure in IRA, register pressure sensitive
781 insn scheduling, and register pressure sensitive loop invariant
784 To make register pressure calculation easy, we always use
785 non-intersected register pressure classes. A move of hard
786 registers from one register pressure class is not more expensive
787 than load and store of the hard registers. Most likely an allocno
788 class will be a subset of a register pressure class and in many
789 cases a register pressure class. That makes usage of register
790 pressure classes a good approximation to find a high register
793 setup_pressure_classes (void)
795 int cost
, i
, n
, curr
;
797 enum reg_class pressure_classes
[N_REG_CLASSES
];
799 HARD_REG_SET temp_hard_regset2
;
803 for (cl
= 0; cl
< N_REG_CLASSES
; cl
++)
805 if (ira_class_hard_regs_num
[cl
] == 0)
807 if (ira_class_hard_regs_num
[cl
] != 1
808 /* A register class without subclasses may contain a few
809 hard registers and movement between them is costly
810 (e.g. SPARC FPCC registers). We still should consider it
811 as a candidate for a pressure class. */
812 && alloc_reg_class_subclasses
[cl
][0] < cl
)
814 /* Check that the moves between any hard registers of the
815 current class are not more expensive for a legal mode
816 than load/store of the hard registers of the current
817 class. Such class is a potential candidate to be a
818 register pressure class. */
819 for (m
= 0; m
< NUM_MACHINE_MODES
; m
++)
821 COPY_HARD_REG_SET (temp_hard_regset
, reg_class_contents
[cl
]);
822 AND_COMPL_HARD_REG_SET (temp_hard_regset
, no_unit_alloc_regs
);
823 AND_COMPL_HARD_REG_SET (temp_hard_regset
,
824 ira_prohibited_class_mode_regs
[cl
][m
]);
825 if (hard_reg_set_empty_p (temp_hard_regset
))
827 ira_init_register_move_cost_if_necessary ((enum machine_mode
) m
);
828 cost
= ira_register_move_cost
[m
][cl
][cl
];
829 if (cost
<= ira_max_memory_move_cost
[m
][cl
][1]
830 || cost
<= ira_max_memory_move_cost
[m
][cl
][0])
833 if (m
>= NUM_MACHINE_MODES
)
838 COPY_HARD_REG_SET (temp_hard_regset
, reg_class_contents
[cl
]);
839 AND_COMPL_HARD_REG_SET (temp_hard_regset
, no_unit_alloc_regs
);
840 /* Remove so far added pressure classes which are subset of the
841 current candidate class. Prefer GENERAL_REGS as a pressure
842 register class to another class containing the same
843 allocatable hard registers. We do this because machine
844 dependent cost hooks might give wrong costs for the latter
845 class but always give the right cost for the former class
847 for (i
= 0; i
< n
; i
++)
849 cl2
= pressure_classes
[i
];
850 COPY_HARD_REG_SET (temp_hard_regset2
, reg_class_contents
[cl2
]);
851 AND_COMPL_HARD_REG_SET (temp_hard_regset2
, no_unit_alloc_regs
);
852 if (hard_reg_set_subset_p (temp_hard_regset
, temp_hard_regset2
)
853 && (! hard_reg_set_equal_p (temp_hard_regset
, temp_hard_regset2
)
854 || cl2
== (int) GENERAL_REGS
))
856 pressure_classes
[curr
++] = (enum reg_class
) cl2
;
860 if (hard_reg_set_subset_p (temp_hard_regset2
, temp_hard_regset
)
861 && (! hard_reg_set_equal_p (temp_hard_regset2
, temp_hard_regset
)
862 || cl
== (int) GENERAL_REGS
))
864 if (hard_reg_set_equal_p (temp_hard_regset2
, temp_hard_regset
))
866 pressure_classes
[curr
++] = (enum reg_class
) cl2
;
868 /* If the current candidate is a subset of a so far added
869 pressure class, don't add it to the list of the pressure
872 pressure_classes
[curr
++] = (enum reg_class
) cl
;
875 #ifdef ENABLE_IRA_CHECKING
877 HARD_REG_SET ignore_hard_regs
;
879 /* Check pressure classes correctness: here we check that hard
880 registers from all register pressure classes contains all hard
881 registers available for the allocation. */
882 CLEAR_HARD_REG_SET (temp_hard_regset
);
883 CLEAR_HARD_REG_SET (temp_hard_regset2
);
884 COPY_HARD_REG_SET (ignore_hard_regs
, no_unit_alloc_regs
);
885 for (cl
= 0; cl
< LIM_REG_CLASSES
; cl
++)
887 /* For some targets (like MIPS with MD_REGS), there are some
888 classes with hard registers available for allocation but
889 not able to hold value of any mode. */
890 for (m
= 0; m
< NUM_MACHINE_MODES
; m
++)
891 if (contains_reg_of_mode
[cl
][m
])
893 if (m
>= NUM_MACHINE_MODES
)
895 IOR_HARD_REG_SET (ignore_hard_regs
, reg_class_contents
[cl
]);
898 for (i
= 0; i
< n
; i
++)
899 if ((int) pressure_classes
[i
] == cl
)
901 IOR_HARD_REG_SET (temp_hard_regset2
, reg_class_contents
[cl
]);
903 IOR_HARD_REG_SET (temp_hard_regset
, reg_class_contents
[cl
]);
905 for (i
= 0; i
< FIRST_PSEUDO_REGISTER
; i
++)
906 /* Some targets (like SPARC with ICC reg) have allocatable regs
907 for which no reg class is defined. */
908 if (REGNO_REG_CLASS (i
) == NO_REGS
)
909 SET_HARD_REG_BIT (ignore_hard_regs
, i
);
910 AND_COMPL_HARD_REG_SET (temp_hard_regset
, ignore_hard_regs
);
911 AND_COMPL_HARD_REG_SET (temp_hard_regset2
, ignore_hard_regs
);
912 ira_assert (hard_reg_set_subset_p (temp_hard_regset2
, temp_hard_regset
));
915 ira_pressure_classes_num
= 0;
916 for (i
= 0; i
< n
; i
++)
918 cl
= (int) pressure_classes
[i
];
919 ira_reg_pressure_class_p
[cl
] = true;
920 ira_pressure_classes
[ira_pressure_classes_num
++] = (enum reg_class
) cl
;
922 setup_stack_reg_pressure_class ();
925 /* Set up IRA_UNIFORM_CLASS_P. Uniform class is a register class
926 whose register move cost between any registers of the class is the
927 same as for all its subclasses. We use the data to speed up the
928 2nd pass of calculations of allocno costs. */
930 setup_uniform_class_p (void)
934 for (cl
= 0; cl
< N_REG_CLASSES
; cl
++)
936 ira_uniform_class_p
[cl
] = false;
937 if (ira_class_hard_regs_num
[cl
] == 0)
939 /* We can not use alloc_reg_class_subclasses here because move
940 cost hooks does not take into account that some registers are
941 unavailable for the subtarget. E.g. for i686, INT_SSE_REGS
942 is element of alloc_reg_class_subclasses for GENERAL_REGS
943 because SSE regs are unavailable. */
944 for (i
= 0; (cl2
= reg_class_subclasses
[cl
][i
]) != LIM_REG_CLASSES
; i
++)
946 if (ira_class_hard_regs_num
[cl2
] == 0)
948 for (m
= 0; m
< NUM_MACHINE_MODES
; m
++)
949 if (contains_reg_of_mode
[cl
][m
] && contains_reg_of_mode
[cl2
][m
])
951 ira_init_register_move_cost_if_necessary ((enum machine_mode
) m
);
952 if (ira_register_move_cost
[m
][cl
][cl
]
953 != ira_register_move_cost
[m
][cl2
][cl2
])
956 if (m
< NUM_MACHINE_MODES
)
959 if (cl2
== LIM_REG_CLASSES
)
960 ira_uniform_class_p
[cl
] = true;
964 /* Set up IRA_ALLOCNO_CLASSES, IRA_ALLOCNO_CLASSES_NUM,
965 IRA_IMPORTANT_CLASSES, and IRA_IMPORTANT_CLASSES_NUM.
967 Target may have many subtargets and not all target hard registers can
968 be used for allocation, e.g. x86 port in 32-bit mode can not use
969 hard registers introduced in x86-64 like r8-r15). Some classes
970 might have the same allocatable hard registers, e.g. INDEX_REGS
971 and GENERAL_REGS in x86 port in 32-bit mode. To decrease different
972 calculations efforts we introduce allocno classes which contain
973 unique non-empty sets of allocatable hard-registers.
975 Pseudo class cost calculation in ira-costs.c is very expensive.
976 Therefore we are trying to decrease number of classes involved in
977 such calculation. Register classes used in the cost calculation
978 are called important classes. They are allocno classes and other
979 non-empty classes whose allocatable hard register sets are inside
980 of an allocno class hard register set. From the first sight, it
981 looks like that they are just allocno classes. It is not true. In
982 example of x86-port in 32-bit mode, allocno classes will contain
983 GENERAL_REGS but not LEGACY_REGS (because allocatable hard
984 registers are the same for the both classes). The important
985 classes will contain GENERAL_REGS and LEGACY_REGS. It is done
986 because a machine description insn constraint may refers for
987 LEGACY_REGS and code in ira-costs.c is mostly base on investigation
988 of the insn constraints. */
990 setup_allocno_and_important_classes (void)
994 HARD_REG_SET temp_hard_regset2
;
995 static enum reg_class classes
[LIM_REG_CLASSES
+ 1];
998 /* Collect classes which contain unique sets of allocatable hard
999 registers. Prefer GENERAL_REGS to other classes containing the
1000 same set of hard registers. */
1001 for (i
= 0; i
< LIM_REG_CLASSES
; i
++)
1003 COPY_HARD_REG_SET (temp_hard_regset
, reg_class_contents
[i
]);
1004 AND_COMPL_HARD_REG_SET (temp_hard_regset
, no_unit_alloc_regs
);
1005 for (j
= 0; j
< n
; j
++)
1008 COPY_HARD_REG_SET (temp_hard_regset2
, reg_class_contents
[cl
]);
1009 AND_COMPL_HARD_REG_SET (temp_hard_regset2
,
1010 no_unit_alloc_regs
);
1011 if (hard_reg_set_equal_p (temp_hard_regset
,
1016 classes
[n
++] = (enum reg_class
) i
;
1017 else if (i
== GENERAL_REGS
)
1018 /* Prefer general regs. For i386 example, it means that
1019 we prefer GENERAL_REGS over INDEX_REGS or LEGACY_REGS
1020 (all of them consists of the same available hard
1022 classes
[j
] = (enum reg_class
) i
;
1024 classes
[n
] = LIM_REG_CLASSES
;
1026 /* Set up classes which can be used for allocnos as classes
1027 containing non-empty unique sets of allocatable hard
1029 ira_allocno_classes_num
= 0;
1030 for (i
= 0; (cl
= classes
[i
]) != LIM_REG_CLASSES
; i
++)
1031 if (ira_class_hard_regs_num
[cl
] > 0)
1032 ira_allocno_classes
[ira_allocno_classes_num
++] = (enum reg_class
) cl
;
1033 ira_important_classes_num
= 0;
1034 /* Add non-allocno classes containing to non-empty set of
1035 allocatable hard regs. */
1036 for (cl
= 0; cl
< N_REG_CLASSES
; cl
++)
1037 if (ira_class_hard_regs_num
[cl
] > 0)
1039 COPY_HARD_REG_SET (temp_hard_regset
, reg_class_contents
[cl
]);
1040 AND_COMPL_HARD_REG_SET (temp_hard_regset
, no_unit_alloc_regs
);
1042 for (j
= 0; j
< ira_allocno_classes_num
; j
++)
1044 COPY_HARD_REG_SET (temp_hard_regset2
,
1045 reg_class_contents
[ira_allocno_classes
[j
]]);
1046 AND_COMPL_HARD_REG_SET (temp_hard_regset2
, no_unit_alloc_regs
);
1047 if ((enum reg_class
) cl
== ira_allocno_classes
[j
])
1049 else if (hard_reg_set_subset_p (temp_hard_regset
,
1053 if (set_p
&& j
>= ira_allocno_classes_num
)
1054 ira_important_classes
[ira_important_classes_num
++]
1055 = (enum reg_class
) cl
;
1057 /* Now add allocno classes to the important classes. */
1058 for (j
= 0; j
< ira_allocno_classes_num
; j
++)
1059 ira_important_classes
[ira_important_classes_num
++]
1060 = ira_allocno_classes
[j
];
1061 for (cl
= 0; cl
< N_REG_CLASSES
; cl
++)
1063 ira_reg_allocno_class_p
[cl
] = false;
1064 ira_reg_pressure_class_p
[cl
] = false;
1066 for (j
= 0; j
< ira_allocno_classes_num
; j
++)
1067 ira_reg_allocno_class_p
[ira_allocno_classes
[j
]] = true;
1068 setup_pressure_classes ();
1069 setup_uniform_class_p ();
1072 /* Setup translation in CLASS_TRANSLATE of all classes into a class
1073 given by array CLASSES of length CLASSES_NUM. The function is used
1074 make translation any reg class to an allocno class or to an
1075 pressure class. This translation is necessary for some
1076 calculations when we can use only allocno or pressure classes and
1077 such translation represents an approximate representation of all
1080 The translation in case when allocatable hard register set of a
1081 given class is subset of allocatable hard register set of a class
1082 in CLASSES is pretty simple. We use smallest classes from CLASSES
1083 containing a given class. If allocatable hard register set of a
1084 given class is not a subset of any corresponding set of a class
1085 from CLASSES, we use the cheapest (with load/store point of view)
1086 class from CLASSES whose set intersects with given class set. */
1088 setup_class_translate_array (enum reg_class
*class_translate
,
1089 int classes_num
, enum reg_class
*classes
)
1092 enum reg_class aclass
, best_class
, *cl_ptr
;
1093 int i
, cost
, min_cost
, best_cost
;
1095 for (cl
= 0; cl
< N_REG_CLASSES
; cl
++)
1096 class_translate
[cl
] = NO_REGS
;
1098 for (i
= 0; i
< classes_num
; i
++)
1100 aclass
= classes
[i
];
1101 for (cl_ptr
= &alloc_reg_class_subclasses
[aclass
][0];
1102 (cl
= *cl_ptr
) != LIM_REG_CLASSES
;
1104 if (class_translate
[cl
] == NO_REGS
)
1105 class_translate
[cl
] = aclass
;
1106 class_translate
[aclass
] = aclass
;
1108 /* For classes which are not fully covered by one of given classes
1109 (in other words covered by more one given class), use the
1111 for (cl
= 0; cl
< N_REG_CLASSES
; cl
++)
1113 if (cl
== NO_REGS
|| class_translate
[cl
] != NO_REGS
)
1115 best_class
= NO_REGS
;
1116 best_cost
= INT_MAX
;
1117 for (i
= 0; i
< classes_num
; i
++)
1119 aclass
= classes
[i
];
1120 COPY_HARD_REG_SET (temp_hard_regset
,
1121 reg_class_contents
[aclass
]);
1122 AND_HARD_REG_SET (temp_hard_regset
, reg_class_contents
[cl
]);
1123 AND_COMPL_HARD_REG_SET (temp_hard_regset
, no_unit_alloc_regs
);
1124 if (! hard_reg_set_empty_p (temp_hard_regset
))
1127 for (mode
= 0; mode
< MAX_MACHINE_MODE
; mode
++)
1129 cost
= (ira_memory_move_cost
[mode
][aclass
][0]
1130 + ira_memory_move_cost
[mode
][aclass
][1]);
1131 if (min_cost
> cost
)
1134 if (best_class
== NO_REGS
|| best_cost
> min_cost
)
1136 best_class
= aclass
;
1137 best_cost
= min_cost
;
1141 class_translate
[cl
] = best_class
;
1145 /* Set up array IRA_ALLOCNO_CLASS_TRANSLATE and
1146 IRA_PRESSURE_CLASS_TRANSLATE. */
1148 setup_class_translate (void)
1150 setup_class_translate_array (ira_allocno_class_translate
,
1151 ira_allocno_classes_num
, ira_allocno_classes
);
1152 setup_class_translate_array (ira_pressure_class_translate
,
1153 ira_pressure_classes_num
, ira_pressure_classes
);
1156 /* Order numbers of allocno classes in original target allocno class
1157 array, -1 for non-allocno classes. */
1158 static int allocno_class_order
[N_REG_CLASSES
];
1160 /* The function used to sort the important classes. */
1162 comp_reg_classes_func (const void *v1p
, const void *v2p
)
1164 enum reg_class cl1
= *(const enum reg_class
*) v1p
;
1165 enum reg_class cl2
= *(const enum reg_class
*) v2p
;
1166 enum reg_class tcl1
, tcl2
;
1169 tcl1
= ira_allocno_class_translate
[cl1
];
1170 tcl2
= ira_allocno_class_translate
[cl2
];
1171 if (tcl1
!= NO_REGS
&& tcl2
!= NO_REGS
1172 && (diff
= allocno_class_order
[tcl1
] - allocno_class_order
[tcl2
]) != 0)
1174 return (int) cl1
- (int) cl2
;
1177 /* For correct work of function setup_reg_class_relation we need to
1178 reorder important classes according to the order of their allocno
1179 classes. It places important classes containing the same
1180 allocatable hard register set adjacent to each other and allocno
1181 class with the allocatable hard register set right after the other
1182 important classes with the same set.
1184 In example from comments of function
1185 setup_allocno_and_important_classes, it places LEGACY_REGS and
1186 GENERAL_REGS close to each other and GENERAL_REGS is after
1189 reorder_important_classes (void)
1193 for (i
= 0; i
< N_REG_CLASSES
; i
++)
1194 allocno_class_order
[i
] = -1;
1195 for (i
= 0; i
< ira_allocno_classes_num
; i
++)
1196 allocno_class_order
[ira_allocno_classes
[i
]] = i
;
1197 qsort (ira_important_classes
, ira_important_classes_num
,
1198 sizeof (enum reg_class
), comp_reg_classes_func
);
1199 for (i
= 0; i
< ira_important_classes_num
; i
++)
1200 ira_important_class_nums
[ira_important_classes
[i
]] = i
;
1203 /* Set up IRA_REG_CLASS_SUBUNION, IRA_REG_CLASS_SUPERUNION,
1204 IRA_REG_CLASS_SUPER_CLASSES, IRA_REG_CLASSES_INTERSECT, and
1205 IRA_REG_CLASSES_INTERSECT_P. For the meaning of the relations,
1206 please see corresponding comments in ira-int.h. */
1208 setup_reg_class_relations (void)
1210 int i
, cl1
, cl2
, cl3
;
1211 HARD_REG_SET intersection_set
, union_set
, temp_set2
;
1212 bool important_class_p
[N_REG_CLASSES
];
1214 memset (important_class_p
, 0, sizeof (important_class_p
));
1215 for (i
= 0; i
< ira_important_classes_num
; i
++)
1216 important_class_p
[ira_important_classes
[i
]] = true;
1217 for (cl1
= 0; cl1
< N_REG_CLASSES
; cl1
++)
1219 ira_reg_class_super_classes
[cl1
][0] = LIM_REG_CLASSES
;
1220 for (cl2
= 0; cl2
< N_REG_CLASSES
; cl2
++)
1222 ira_reg_classes_intersect_p
[cl1
][cl2
] = false;
1223 ira_reg_class_intersect
[cl1
][cl2
] = NO_REGS
;
1224 ira_reg_class_subset
[cl1
][cl2
] = NO_REGS
;
1225 COPY_HARD_REG_SET (temp_hard_regset
, reg_class_contents
[cl1
]);
1226 AND_COMPL_HARD_REG_SET (temp_hard_regset
, no_unit_alloc_regs
);
1227 COPY_HARD_REG_SET (temp_set2
, reg_class_contents
[cl2
]);
1228 AND_COMPL_HARD_REG_SET (temp_set2
, no_unit_alloc_regs
);
1229 if (hard_reg_set_empty_p (temp_hard_regset
)
1230 && hard_reg_set_empty_p (temp_set2
))
1232 /* The both classes have no allocatable hard registers
1233 -- take all class hard registers into account and use
1234 reg_class_subunion and reg_class_superunion. */
1237 cl3
= reg_class_subclasses
[cl1
][i
];
1238 if (cl3
== LIM_REG_CLASSES
)
1240 if (reg_class_subset_p (ira_reg_class_intersect
[cl1
][cl2
],
1241 (enum reg_class
) cl3
))
1242 ira_reg_class_intersect
[cl1
][cl2
] = (enum reg_class
) cl3
;
1244 ira_reg_class_subunion
[cl1
][cl2
] = reg_class_subunion
[cl1
][cl2
];
1245 ira_reg_class_superunion
[cl1
][cl2
] = reg_class_superunion
[cl1
][cl2
];
1248 ira_reg_classes_intersect_p
[cl1
][cl2
]
1249 = hard_reg_set_intersect_p (temp_hard_regset
, temp_set2
);
1250 if (important_class_p
[cl1
] && important_class_p
[cl2
]
1251 && hard_reg_set_subset_p (temp_hard_regset
, temp_set2
))
1253 /* CL1 and CL2 are important classes and CL1 allocatable
1254 hard register set is inside of CL2 allocatable hard
1255 registers -- make CL1 a superset of CL2. */
1258 p
= &ira_reg_class_super_classes
[cl1
][0];
1259 while (*p
!= LIM_REG_CLASSES
)
1261 *p
++ = (enum reg_class
) cl2
;
1262 *p
= LIM_REG_CLASSES
;
1264 ira_reg_class_subunion
[cl1
][cl2
] = NO_REGS
;
1265 ira_reg_class_superunion
[cl1
][cl2
] = NO_REGS
;
1266 COPY_HARD_REG_SET (intersection_set
, reg_class_contents
[cl1
]);
1267 AND_HARD_REG_SET (intersection_set
, reg_class_contents
[cl2
]);
1268 AND_COMPL_HARD_REG_SET (intersection_set
, no_unit_alloc_regs
);
1269 COPY_HARD_REG_SET (union_set
, reg_class_contents
[cl1
]);
1270 IOR_HARD_REG_SET (union_set
, reg_class_contents
[cl2
]);
1271 AND_COMPL_HARD_REG_SET (union_set
, no_unit_alloc_regs
);
1272 for (cl3
= 0; cl3
< N_REG_CLASSES
; cl3
++)
1274 COPY_HARD_REG_SET (temp_hard_regset
, reg_class_contents
[cl3
]);
1275 AND_COMPL_HARD_REG_SET (temp_hard_regset
, no_unit_alloc_regs
);
1276 if (hard_reg_set_subset_p (temp_hard_regset
, intersection_set
))
1278 /* CL3 allocatable hard register set is inside of
1279 intersection of allocatable hard register sets
1281 if (important_class_p
[cl3
])
1286 [(int) ira_reg_class_intersect
[cl1
][cl2
]]);
1287 AND_COMPL_HARD_REG_SET (temp_set2
, no_unit_alloc_regs
);
1288 if (! hard_reg_set_subset_p (temp_hard_regset
, temp_set2
)
1289 /* If the allocatable hard register sets are
1290 the same, prefer GENERAL_REGS or the
1291 smallest class for debugging
1293 || (hard_reg_set_equal_p (temp_hard_regset
, temp_set2
)
1294 && (cl3
== GENERAL_REGS
1295 || ((ira_reg_class_intersect
[cl1
][cl2
]
1297 && hard_reg_set_subset_p
1298 (reg_class_contents
[cl3
],
1301 ira_reg_class_intersect
[cl1
][cl2
]])))))
1302 ira_reg_class_intersect
[cl1
][cl2
] = (enum reg_class
) cl3
;
1306 reg_class_contents
[(int) ira_reg_class_subset
[cl1
][cl2
]]);
1307 AND_COMPL_HARD_REG_SET (temp_set2
, no_unit_alloc_regs
);
1308 if (! hard_reg_set_subset_p (temp_hard_regset
, temp_set2
)
1309 /* Ignore unavailable hard registers and prefer
1310 smallest class for debugging purposes. */
1311 || (hard_reg_set_equal_p (temp_hard_regset
, temp_set2
)
1312 && hard_reg_set_subset_p
1313 (reg_class_contents
[cl3
],
1315 [(int) ira_reg_class_subset
[cl1
][cl2
]])))
1316 ira_reg_class_subset
[cl1
][cl2
] = (enum reg_class
) cl3
;
1318 if (important_class_p
[cl3
]
1319 && hard_reg_set_subset_p (temp_hard_regset
, union_set
))
1321 /* CL3 allocatable hard register set is inside of
1322 union of allocatable hard register sets of CL1
1326 reg_class_contents
[(int) ira_reg_class_subunion
[cl1
][cl2
]]);
1327 AND_COMPL_HARD_REG_SET (temp_set2
, no_unit_alloc_regs
);
1328 if (ira_reg_class_subunion
[cl1
][cl2
] == NO_REGS
1329 || (hard_reg_set_subset_p (temp_set2
, temp_hard_regset
)
1331 && (! hard_reg_set_equal_p (temp_set2
,
1333 || cl3
== GENERAL_REGS
1334 /* If the allocatable hard register sets are the
1335 same, prefer GENERAL_REGS or the smallest
1336 class for debugging purposes. */
1337 || (ira_reg_class_subunion
[cl1
][cl2
] != GENERAL_REGS
1338 && hard_reg_set_subset_p
1339 (reg_class_contents
[cl3
],
1341 [(int) ira_reg_class_subunion
[cl1
][cl2
]])))))
1342 ira_reg_class_subunion
[cl1
][cl2
] = (enum reg_class
) cl3
;
1344 if (hard_reg_set_subset_p (union_set
, temp_hard_regset
))
1346 /* CL3 allocatable hard register set contains union
1347 of allocatable hard register sets of CL1 and
1351 reg_class_contents
[(int) ira_reg_class_superunion
[cl1
][cl2
]]);
1352 AND_COMPL_HARD_REG_SET (temp_set2
, no_unit_alloc_regs
);
1353 if (ira_reg_class_superunion
[cl1
][cl2
] == NO_REGS
1354 || (hard_reg_set_subset_p (temp_hard_regset
, temp_set2
)
1356 && (! hard_reg_set_equal_p (temp_set2
,
1358 || cl3
== GENERAL_REGS
1359 /* If the allocatable hard register sets are the
1360 same, prefer GENERAL_REGS or the smallest
1361 class for debugging purposes. */
1362 || (ira_reg_class_superunion
[cl1
][cl2
] != GENERAL_REGS
1363 && hard_reg_set_subset_p
1364 (reg_class_contents
[cl3
],
1366 [(int) ira_reg_class_superunion
[cl1
][cl2
]])))))
1367 ira_reg_class_superunion
[cl1
][cl2
] = (enum reg_class
) cl3
;
1374 /* Output all uniform and important classes into file F. */
1376 print_unform_and_important_classes (FILE *f
)
1378 static const char *const reg_class_names
[] = REG_CLASS_NAMES
;
1381 fprintf (f
, "Uniform classes:\n");
1382 for (cl
= 0; cl
< N_REG_CLASSES
; cl
++)
1383 if (ira_uniform_class_p
[cl
])
1384 fprintf (f
, " %s", reg_class_names
[cl
]);
1385 fprintf (f
, "\nImportant classes:\n");
1386 for (i
= 0; i
< ira_important_classes_num
; i
++)
1387 fprintf (f
, " %s", reg_class_names
[ira_important_classes
[i
]]);
1391 /* Output all possible allocno or pressure classes and their
1392 translation map into file F. */
1394 print_translated_classes (FILE *f
, bool pressure_p
)
1396 int classes_num
= (pressure_p
1397 ? ira_pressure_classes_num
: ira_allocno_classes_num
);
1398 enum reg_class
*classes
= (pressure_p
1399 ? ira_pressure_classes
: ira_allocno_classes
);
1400 enum reg_class
*class_translate
= (pressure_p
1401 ? ira_pressure_class_translate
1402 : ira_allocno_class_translate
);
1403 static const char *const reg_class_names
[] = REG_CLASS_NAMES
;
1406 fprintf (f
, "%s classes:\n", pressure_p
? "Pressure" : "Allocno");
1407 for (i
= 0; i
< classes_num
; i
++)
1408 fprintf (f
, " %s", reg_class_names
[classes
[i
]]);
1409 fprintf (f
, "\nClass translation:\n");
1410 for (i
= 0; i
< N_REG_CLASSES
; i
++)
1411 fprintf (f
, " %s -> %s\n", reg_class_names
[i
],
1412 reg_class_names
[class_translate
[i
]]);
1415 /* Output all possible allocno and translation classes and the
1416 translation maps into stderr. */
1418 ira_debug_allocno_classes (void)
1420 print_unform_and_important_classes (stderr
);
1421 print_translated_classes (stderr
, false);
1422 print_translated_classes (stderr
, true);
1425 /* Set up different arrays concerning class subsets, allocno and
1426 important classes. */
1428 find_reg_classes (void)
1430 setup_allocno_and_important_classes ();
1431 setup_class_translate ();
1432 reorder_important_classes ();
1433 setup_reg_class_relations ();
1438 /* Set up the array above. */
1440 setup_hard_regno_aclass (void)
1444 for (i
= 0; i
< FIRST_PSEUDO_REGISTER
; i
++)
1447 ira_hard_regno_allocno_class
[i
]
1448 = (TEST_HARD_REG_BIT (no_unit_alloc_regs
, i
)
1450 : ira_allocno_class_translate
[REGNO_REG_CLASS (i
)]);
1454 ira_hard_regno_allocno_class
[i
] = NO_REGS
;
1455 for (j
= 0; j
< ira_allocno_classes_num
; j
++)
1457 cl
= ira_allocno_classes
[j
];
1458 if (ira_class_hard_reg_index
[cl
][i
] >= 0)
1460 ira_hard_regno_allocno_class
[i
] = cl
;
1470 /* Form IRA_REG_CLASS_MAX_NREGS and IRA_REG_CLASS_MIN_NREGS maps. */
1472 setup_reg_class_nregs (void)
1476 for (m
= 0; m
< MAX_MACHINE_MODE
; m
++)
1478 for (cl
= 0; cl
< N_REG_CLASSES
; cl
++)
1479 ira_reg_class_max_nregs
[cl
][m
]
1480 = ira_reg_class_min_nregs
[cl
][m
]
1481 = targetm
.class_max_nregs ((reg_class_t
) cl
, (enum machine_mode
) m
);
1482 for (cl
= 0; cl
< N_REG_CLASSES
; cl
++)
1484 (cl2
= alloc_reg_class_subclasses
[cl
][i
]) != LIM_REG_CLASSES
;
1486 if (ira_reg_class_min_nregs
[cl2
][m
]
1487 < ira_reg_class_min_nregs
[cl
][m
])
1488 ira_reg_class_min_nregs
[cl
][m
] = ira_reg_class_min_nregs
[cl2
][m
];
1494 /* Set up IRA_PROHIBITED_CLASS_MODE_REGS and IRA_CLASS_SINGLETON.
1495 This function is called once IRA_CLASS_HARD_REGS has been initialized. */
1497 setup_prohibited_class_mode_regs (void)
1499 int j
, k
, hard_regno
, cl
, last_hard_regno
, count
;
1501 for (cl
= (int) N_REG_CLASSES
- 1; cl
>= 0; cl
--)
1503 COPY_HARD_REG_SET (temp_hard_regset
, reg_class_contents
[cl
]);
1504 AND_COMPL_HARD_REG_SET (temp_hard_regset
, no_unit_alloc_regs
);
1505 for (j
= 0; j
< NUM_MACHINE_MODES
; j
++)
1508 last_hard_regno
= -1;
1509 CLEAR_HARD_REG_SET (ira_prohibited_class_mode_regs
[cl
][j
]);
1510 for (k
= ira_class_hard_regs_num
[cl
] - 1; k
>= 0; k
--)
1512 hard_regno
= ira_class_hard_regs
[cl
][k
];
1513 if (! HARD_REGNO_MODE_OK (hard_regno
, (enum machine_mode
) j
))
1514 SET_HARD_REG_BIT (ira_prohibited_class_mode_regs
[cl
][j
],
1516 else if (in_hard_reg_set_p (temp_hard_regset
,
1517 (enum machine_mode
) j
, hard_regno
))
1519 last_hard_regno
= hard_regno
;
1523 ira_class_singleton
[cl
][j
] = (count
== 1 ? last_hard_regno
: -1);
1528 /* Clarify IRA_PROHIBITED_CLASS_MODE_REGS by excluding hard registers
1529 spanning from one register pressure class to another one. It is
1530 called after defining the pressure classes. */
1532 clarify_prohibited_class_mode_regs (void)
1534 int j
, k
, hard_regno
, cl
, pclass
, nregs
;
1536 for (cl
= (int) N_REG_CLASSES
- 1; cl
>= 0; cl
--)
1537 for (j
= 0; j
< NUM_MACHINE_MODES
; j
++)
1539 CLEAR_HARD_REG_SET (ira_useful_class_mode_regs
[cl
][j
]);
1540 for (k
= ira_class_hard_regs_num
[cl
] - 1; k
>= 0; k
--)
1542 hard_regno
= ira_class_hard_regs
[cl
][k
];
1543 if (TEST_HARD_REG_BIT (ira_prohibited_class_mode_regs
[cl
][j
], hard_regno
))
1545 nregs
= hard_regno_nregs
[hard_regno
][j
];
1546 if (hard_regno
+ nregs
> FIRST_PSEUDO_REGISTER
)
1548 SET_HARD_REG_BIT (ira_prohibited_class_mode_regs
[cl
][j
],
1552 pclass
= ira_pressure_class_translate
[REGNO_REG_CLASS (hard_regno
)];
1553 for (nregs
-- ;nregs
>= 0; nregs
--)
1554 if (((enum reg_class
) pclass
1555 != ira_pressure_class_translate
[REGNO_REG_CLASS
1556 (hard_regno
+ nregs
)]))
1558 SET_HARD_REG_BIT (ira_prohibited_class_mode_regs
[cl
][j
],
1562 if (!TEST_HARD_REG_BIT (ira_prohibited_class_mode_regs
[cl
][j
],
1564 add_to_hard_reg_set (&ira_useful_class_mode_regs
[cl
][j
],
1565 (enum machine_mode
) j
, hard_regno
);
/* Allocate and initialize IRA_REGISTER_MOVE_COST, IRA_MAY_MOVE_IN_COST
   and IRA_MAY_MOVE_OUT_COST for MODE.  */
void
ira_init_register_move_cost (enum machine_mode mode)
{
  static unsigned short last_move_cost[N_REG_CLASSES][N_REG_CLASSES];
  bool all_match = true;
  unsigned int cl1, cl2;

  ira_assert (ira_register_move_cost[mode] == NULL
              && ira_may_move_in_cost[mode] == NULL
              && ira_may_move_out_cost[mode] == NULL);
  ira_assert (have_regs_of_mode[mode]);
  for (cl1 = 0; cl1 < N_REG_CLASSES; cl1++)
    for (cl2 = 0; cl2 < N_REG_CLASSES; cl2++)
      {
        int cost;

        if (!contains_reg_of_mode[cl1][mode]
            || !contains_reg_of_mode[cl2][mode])
          {
            if ((ira_reg_class_max_nregs[cl1][mode]
                 > ira_class_hard_regs_num[cl1])
                || (ira_reg_class_max_nregs[cl2][mode]
                    > ira_class_hard_regs_num[cl2]))
              cost = 65535;
            else
              cost = (ira_memory_move_cost[mode][cl1][0]
                      + ira_memory_move_cost[mode][cl2][1]) * 2;
          }
        else
          {
            cost = register_move_cost (mode, (enum reg_class) cl1,
                                       (enum reg_class) cl2);
            ira_assert (cost < 65535);
          }
        all_match &= (last_move_cost[cl1][cl2] == cost);
        last_move_cost[cl1][cl2] = cost;
      }
  if (all_match && last_mode_for_init_move_cost != -1)
    {
      ira_register_move_cost[mode]
        = ira_register_move_cost[last_mode_for_init_move_cost];
      ira_may_move_in_cost[mode]
        = ira_may_move_in_cost[last_mode_for_init_move_cost];
      ira_may_move_out_cost[mode]
        = ira_may_move_out_cost[last_mode_for_init_move_cost];
      return;
    }
  last_mode_for_init_move_cost = mode;
  ira_register_move_cost[mode] = XNEWVEC (move_table, N_REG_CLASSES);
  ira_may_move_in_cost[mode] = XNEWVEC (move_table, N_REG_CLASSES);
  ira_may_move_out_cost[mode] = XNEWVEC (move_table, N_REG_CLASSES);
  for (cl1 = 0; cl1 < N_REG_CLASSES; cl1++)
    for (cl2 = 0; cl2 < N_REG_CLASSES; cl2++)
      {
        int cost;
        enum reg_class *p1, *p2;

        if (last_move_cost[cl1][cl2] == 65535)
          {
            ira_register_move_cost[mode][cl1][cl2] = 65535;
            ira_may_move_in_cost[mode][cl1][cl2] = 65535;
            ira_may_move_out_cost[mode][cl1][cl2] = 65535;
          }
        else
          {
            cost = last_move_cost[cl1][cl2];

            for (p2 = &reg_class_subclasses[cl2][0];
                 *p2 != LIM_REG_CLASSES; p2++)
              if (ira_class_hard_regs_num[*p2] > 0
                  && (ira_reg_class_max_nregs[*p2][mode]
                      <= ira_class_hard_regs_num[*p2]))
                cost = MAX (cost, ira_register_move_cost[mode][cl1][*p2]);

            for (p1 = &reg_class_subclasses[cl1][0];
                 *p1 != LIM_REG_CLASSES; p1++)
              if (ira_class_hard_regs_num[*p1] > 0
                  && (ira_reg_class_max_nregs[*p1][mode]
                      <= ira_class_hard_regs_num[*p1]))
                cost = MAX (cost, ira_register_move_cost[mode][*p1][cl2]);

            ira_assert (cost <= 65535);
            ira_register_move_cost[mode][cl1][cl2] = cost;

            if (ira_class_subset_p[cl1][cl2])
              ira_may_move_in_cost[mode][cl1][cl2] = 0;
            else
              ira_may_move_in_cost[mode][cl1][cl2] = cost;

            if (ira_class_subset_p[cl2][cl1])
              ira_may_move_out_cost[mode][cl1][cl2] = 0;
            else
              ira_may_move_out_cost[mode][cl1][cl2] = cost;
          }
      }
}
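/* 65535 acts as an "impossible cost" sentinel, which is why real move
   costs are asserted to stay below it.  When two modes end up with
   identical cost tables, the later mode reuses the earlier mode's
   tables instead of allocating new ones (illustratively, narrow integer
   modes often share a single move_table on simple targets).  */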
/* This is called once during compiler work.  It sets up
   different arrays whose values don't depend on the compiled
   function.  */
void
ira_init_once (void)
{
  ira_init_costs_once ();
  lra_init_once ();
}

/* Free ira_register_move_cost, ira_may_move_in_cost and
   ira_may_move_out_cost for each mode.  */
void
target_ira_int::free_register_move_costs (void)
{
  int mode, i;

  /* Reset move_cost and friends, making sure we only free shared
     table entries once.  */
  for (mode = 0; mode < MAX_MACHINE_MODE; mode++)
    if (x_ira_register_move_cost[mode])
      {
        for (i = 0;
             i < mode && (x_ira_register_move_cost[i]
                          != x_ira_register_move_cost[mode]);
             i++)
          ;
        if (i == mode)
          {
            free (x_ira_register_move_cost[mode]);
            free (x_ira_may_move_in_cost[mode]);
            free (x_ira_may_move_out_cost[mode]);
          }
      }
  memset (x_ira_register_move_cost, 0, sizeof x_ira_register_move_cost);
  memset (x_ira_may_move_in_cost, 0, sizeof x_ira_may_move_in_cost);
  memset (x_ira_may_move_out_cost, 0, sizeof x_ira_may_move_out_cost);
  last_mode_for_init_move_cost = -1;
}

target_ira_int::~target_ira_int ()
{
  free_ira_costs ();
  free_register_move_costs ();
}

/* This is called every time when register related information is
   changed.  */
void
ira_init (void)
{
  this_target_ira_int->free_register_move_costs ();
  setup_reg_mode_hard_regset ();
  setup_alloc_regs (flag_omit_frame_pointer != 0);
  setup_class_subset_and_memory_move_costs ();
  setup_reg_class_nregs ();
  setup_prohibited_class_mode_regs ();
  find_reg_classes ();
  clarify_prohibited_class_mode_regs ();
  setup_hard_regno_aclass ();
}
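/* ira_init is rerun whenever target-dependent register information
   changes (for example when a target attribute or option switches the
   set of usable registers), which is why it first discards the cached
   move-cost tables before rebuilding the class data.  */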
#define ira_prohibited_mode_move_regs_initialized_p \
  (this_target_ira_int->x_ira_prohibited_mode_move_regs_initialized_p)

/* Set up IRA_PROHIBITED_MODE_MOVE_REGS.  */
static void
setup_prohibited_mode_move_regs (void)
{
  int i, j;
  rtx test_reg1, test_reg2, move_pat;
  rtx_insn *move_insn;

  if (ira_prohibited_mode_move_regs_initialized_p)
    return;
  ira_prohibited_mode_move_regs_initialized_p = true;
  test_reg1 = gen_rtx_REG (VOIDmode, 0);
  test_reg2 = gen_rtx_REG (VOIDmode, 0);
  move_pat = gen_rtx_SET (VOIDmode, test_reg1, test_reg2);
  move_insn = gen_rtx_INSN (VOIDmode, 0, 0, 0, move_pat, 0, -1, 0);
  for (i = 0; i < NUM_MACHINE_MODES; i++)
    {
      SET_HARD_REG_SET (ira_prohibited_mode_move_regs[i]);
      for (j = 0; j < FIRST_PSEUDO_REGISTER; j++)
        {
          if (! HARD_REGNO_MODE_OK (j, (enum machine_mode) i))
            continue;
          SET_REGNO_RAW (test_reg1, j);
          PUT_MODE (test_reg1, (enum machine_mode) i);
          SET_REGNO_RAW (test_reg2, j);
          PUT_MODE (test_reg2, (enum machine_mode) i);
          INSN_CODE (move_insn) = -1;
          recog_memoized (move_insn);
          if (INSN_CODE (move_insn) < 0)
            continue;
          extract_insn (move_insn);
          if (! constrain_operands (1))
            continue;
          CLEAR_HARD_REG_BIT (ira_prohibited_mode_move_regs[i], j);
        }
    }
}
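/* The loop above probes the backend with a single synthetic
   (set (reg) (reg)) insn: it retargets the two registers at every hard
   register and mode and asks recog whether the move matches; only
   registers whose self-move is recognized are cleared from the
   prohibited set.  */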
1777 /* Setup possible alternatives in ALTS for INSN. */
1779 ira_setup_alts (rtx_insn
*insn
, HARD_REG_SET
&alts
)
1781 /* MAP nalt * nop -> start of constraints for given operand and
1783 static vec
<const char *> insn_constraints
;
1788 int commutative
= -1;
1790 extract_insn (insn
);
1791 CLEAR_HARD_REG_SET (alts
);
1792 insn_constraints
.release ();
1793 insn_constraints
.safe_grow_cleared (recog_data
.n_operands
1794 * recog_data
.n_alternatives
+ 1);
1795 /* Check that the hard reg set is enough for holding all
1796 alternatives. It is hard to imagine the situation when the
1797 assertion is wrong. */
1798 ira_assert (recog_data
.n_alternatives
1799 <= (int) MAX (sizeof (HARD_REG_ELT_TYPE
) * CHAR_BIT
,
1800 FIRST_PSEUDO_REGISTER
));
1801 for (curr_swapped
= false;; curr_swapped
= true)
1803 /* Calculate some data common for all alternatives to speed up the
1805 for (nop
= 0; nop
< recog_data
.n_operands
; nop
++)
1807 for (nalt
= 0, p
= recog_data
.constraints
[nop
];
1808 nalt
< recog_data
.n_alternatives
;
1811 insn_constraints
[nop
* recog_data
.n_alternatives
+ nalt
] = p
;
1812 while (*p
&& *p
!= ',')
1818 for (nalt
= 0; nalt
< recog_data
.n_alternatives
; nalt
++)
1820 if (!TEST_BIT (recog_data
.enabled_alternatives
, nalt
)
1821 || TEST_HARD_REG_BIT (alts
, nalt
))
1824 for (nop
= 0; nop
< recog_data
.n_operands
; nop
++)
1828 op
= recog_data
.operand
[nop
];
1829 p
= insn_constraints
[nop
* recog_data
.n_alternatives
+ nalt
];
1830 if (*p
== 0 || *p
== ',')
1834 switch (c
= *p
, len
= CONSTRAINT_LEN (c
, p
), c
)
1844 /* We only support one commutative marker, the
1845 first one. We already set commutative
1847 if (commutative
< 0)
1851 case '0': case '1': case '2': case '3': case '4':
1852 case '5': case '6': case '7': case '8': case '9':
1862 enum constraint_num cn
= lookup_constraint (p
);
1863 switch (get_constraint_type (cn
))
1866 if (reg_class_for_constraint (cn
) != NO_REGS
)
1871 if (CONST_INT_P (op
)
1872 && (insn_const_int_ok_for_constraint
1882 if (constraint_satisfied_p (op
, cn
))
1889 while (p
+= len
, c
);
1894 if (nop
>= recog_data
.n_operands
)
1895 SET_HARD_REG_BIT (alts
, nalt
);
1897 if (commutative
< 0)
1901 op
= recog_data
.operand
[commutative
];
1902 recog_data
.operand
[commutative
] = recog_data
.operand
[commutative
+ 1];
1903 recog_data
.operand
[commutative
+ 1] = op
;
1908 /* Return the number of the output non-early clobber operand which
1909 should be the same in any case as operand with number OP_NUM (or
1910 negative value if there is no such operand). The function takes
1911 only really possible alternatives into consideration. */
1913 ira_get_dup_out_num (int op_num
, HARD_REG_SET
&alts
)
1915 int curr_alt
, c
, original
, dup
;
1916 bool ignore_p
, use_commut_op_p
;
1919 if (op_num
< 0 || recog_data
.n_alternatives
== 0)
1921 /* We should find duplications only for input operands. */
1922 if (recog_data
.operand_type
[op_num
] != OP_IN
)
1924 str
= recog_data
.constraints
[op_num
];
1925 use_commut_op_p
= false;
1928 rtx op
= recog_data
.operand
[op_num
];
1930 for (curr_alt
= 0, ignore_p
= !TEST_HARD_REG_BIT (alts
, curr_alt
),
1941 ignore_p
= !TEST_HARD_REG_BIT (alts
, curr_alt
);
1943 else if (! ignore_p
)
1950 enum constraint_num cn
= lookup_constraint (str
);
1951 enum reg_class cl
= reg_class_for_constraint (cn
);
1953 && !targetm
.class_likely_spilled_p (cl
))
1955 if (constraint_satisfied_p (op
, cn
))
1960 case '0': case '1': case '2': case '3': case '4':
1961 case '5': case '6': case '7': case '8': case '9':
1962 if (original
!= -1 && original
!= c
)
1967 str
+= CONSTRAINT_LEN (c
, str
);
1972 for (ignore_p
= false, str
= recog_data
.constraints
[original
- '0'];
1980 else if (*str
== '#')
1982 else if (! ignore_p
)
1985 dup
= original
- '0';
1986 /* It is better ignore an alternative with early clobber. */
1987 else if (*str
== '&')
1993 if (use_commut_op_p
)
1995 use_commut_op_p
= true;
1996 if (recog_data
.constraints
[op_num
][0] == '%')
1997 str
= recog_data
.constraints
[op_num
+ 1];
1998 else if (op_num
> 0 && recog_data
.constraints
[op_num
- 1][0] == '%')
1999 str
= recog_data
.constraints
[op_num
- 1];
2008 /* Search forward to see if the source register of a copy insn dies
2009 before either it or the destination register is modified, but don't
2010 scan past the end of the basic block. If so, we can replace the
2011 source with the destination and let the source die in the copy
2014 This will reduce the number of registers live in that range and may
2015 enable the destination and the source coalescing, thus often saving
2016 one register in addition to a register-register copy. */
2019 decrease_live_ranges_number (void)
2023 rtx set
, src
, dest
, dest_death
, q
, note
;
2027 if (! flag_expensive_optimizations
)
2031 fprintf (ira_dump_file
, "Starting decreasing number of live ranges...\n");
2033 FOR_EACH_BB_FN (bb
, cfun
)
2034 FOR_BB_INSNS (bb
, insn
)
2036 set
= single_set (insn
);
2039 src
= SET_SRC (set
);
2040 dest
= SET_DEST (set
);
2041 if (! REG_P (src
) || ! REG_P (dest
)
2042 || find_reg_note (insn
, REG_DEAD
, src
))
2044 sregno
= REGNO (src
);
2045 dregno
= REGNO (dest
);
2047 /* We don't want to mess with hard regs if register classes
2049 if (sregno
== dregno
2050 || (targetm
.small_register_classes_for_mode_p (GET_MODE (src
))
2051 && (sregno
< FIRST_PSEUDO_REGISTER
2052 || dregno
< FIRST_PSEUDO_REGISTER
))
2053 /* We don't see all updates to SP if they are in an
2054 auto-inc memory reference, so we must disallow this
2055 optimization on them. */
2056 || sregno
== STACK_POINTER_REGNUM
2057 || dregno
== STACK_POINTER_REGNUM
)
2060 dest_death
= NULL_RTX
;
2062 for (p
= NEXT_INSN (insn
); p
; p
= NEXT_INSN (p
))
2066 if (BLOCK_FOR_INSN (p
) != bb
)
2069 if (reg_set_p (src
, p
) || reg_set_p (dest
, p
)
2070 /* If SRC is an asm-declared register, it must not be
2071 replaced in any asm. Unfortunately, the REG_EXPR
2072 tree for the asm variable may be absent in the SRC
2073 rtx, so we can't check the actual register
2074 declaration easily (the asm operand will have it,
2075 though). To avoid complicating the test for a rare
2076 case, we just don't perform register replacement
2077 for a hard reg mentioned in an asm. */
2078 || (sregno
< FIRST_PSEUDO_REGISTER
2079 && asm_noperands (PATTERN (p
)) >= 0
2080 && reg_overlap_mentioned_p (src
, PATTERN (p
)))
2081 /* Don't change hard registers used by a call. */
2082 || (CALL_P (p
) && sregno
< FIRST_PSEUDO_REGISTER
2083 && find_reg_fusage (p
, USE
, src
))
2084 /* Don't change a USE of a register. */
2085 || (GET_CODE (PATTERN (p
)) == USE
2086 && reg_overlap_mentioned_p (src
, XEXP (PATTERN (p
), 0))))
2089 /* See if all of SRC dies in P. This test is slightly
2090 more conservative than it needs to be. */
2091 if ((note
= find_regno_note (p
, REG_DEAD
, sregno
))
2092 && GET_MODE (XEXP (note
, 0)) == GET_MODE (src
))
2096 /* We can do the optimization. Scan forward from INSN
2097 again, replacing regs as we go. Set FAILED if a
2098 replacement can't be done. In that case, we can't
2099 move the death note for SRC. This should be
2102 /* Set to stop at next insn. */
2103 for (q
= next_real_insn (insn
);
2104 q
!= next_real_insn (p
);
2105 q
= next_real_insn (q
))
2107 if (reg_overlap_mentioned_p (src
, PATTERN (q
)))
2109 /* If SRC is a hard register, we might miss
2110 some overlapping registers with
2111 validate_replace_rtx, so we would have to
2112 undo it. We can't if DEST is present in
2113 the insn, so fail in that combination of
2115 if (sregno
< FIRST_PSEUDO_REGISTER
2116 && reg_mentioned_p (dest
, PATTERN (q
)))
2119 /* Attempt to replace all uses. */
2120 else if (!validate_replace_rtx (src
, dest
, q
))
2123 /* If this succeeded, but some part of the
2124 register is still present, undo the
2126 else if (sregno
< FIRST_PSEUDO_REGISTER
2127 && reg_overlap_mentioned_p (src
, PATTERN (q
)))
2129 validate_replace_rtx (dest
, src
, q
);
2134 /* If DEST dies here, remove the death note and
2135 save it for later. Make sure ALL of DEST dies
2136 here; again, this is overly conservative. */
2138 && (dest_death
= find_regno_note (q
, REG_DEAD
, dregno
)))
2140 if (GET_MODE (XEXP (dest_death
, 0)) == GET_MODE (dest
))
2141 remove_note (q
, dest_death
);
2152 /* Move death note of SRC from P to INSN. */
2153 remove_note (p
, note
);
2154 XEXP (note
, 1) = REG_NOTES (insn
);
2155 REG_NOTES (insn
) = note
;
2158 /* DEST is also dead if INSN has a REG_UNUSED note for
2162 = find_regno_note (insn
, REG_UNUSED
, dregno
)))
2164 PUT_REG_NOTE_KIND (dest_death
, REG_DEAD
);
2165 remove_note (insn
, dest_death
);
2168 /* Put death note of DEST on P if we saw it die. */
2171 XEXP (dest_death
, 1) = REG_NOTES (p
);
2172 REG_NOTES (p
) = dest_death
;
2177 /* If SRC is a hard register which is set or killed in
2178 some other way, we can't do this optimization. */
2179 else if (sregno
< FIRST_PSEUDO_REGISTER
&& dead_or_set_p (p
, src
))
/* Return nonzero if REGNO is a particularly bad choice for reloading X.  */
static bool
ira_bad_reload_regno_1 (int regno, rtx x)
{
  int x_regno, n, i;
  ira_allocno_t a;
  enum reg_class pref;

  /* We only deal with pseudo regs.  */
  if (! x || GET_CODE (x) != REG)
    return false;

  x_regno = REGNO (x);
  if (x_regno < FIRST_PSEUDO_REGISTER)
    return false;

  /* If the pseudo prefers REGNO explicitly, then do not consider
     REGNO a bad spill choice.  */
  pref = reg_preferred_class (x_regno);
  if (reg_class_size[pref] == 1)
    return !TEST_HARD_REG_BIT (reg_class_contents[pref], regno);

  /* If the pseudo conflicts with REGNO, then we consider REGNO a
     poor choice for a reload regno.  */
  a = ira_regno_allocno_map[x_regno];
  n = ALLOCNO_NUM_OBJECTS (a);
  for (i = 0; i < n; i++)
    {
      ira_object_t obj = ALLOCNO_OBJECT (a, i);
      if (TEST_HARD_REG_BIT (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj), regno))
        return true;
    }
  return false;
}

/* Return nonzero if REGNO is a particularly bad choice for reloading
   IN or OUT.  */
bool
ira_bad_reload_regno (int regno, rtx in, rtx out)
{
  return (ira_bad_reload_regno_1 (regno, in)
          || ira_bad_reload_regno_1 (regno, out));
}
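/* Reload uses this predicate to steer spill-register choice: REGNO is
   reported as bad when the pseudo in X conflicts with it, or, if the
   pseudo's preferred class contains a single register, when REGNO is
   anything other than that register.  */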
/* Add register clobbers from asm statements.  */
static void
compute_regs_asm_clobbered (void)
{
  basic_block bb;

  FOR_EACH_BB_FN (bb, cfun)
    {
      rtx_insn *insn;
      FOR_BB_INSNS_REVERSE (bb, insn)
        {
          df_ref def;

          if (NONDEBUG_INSN_P (insn) && extract_asm_operands (PATTERN (insn)))
            FOR_EACH_INSN_DEF (def, insn)
              {
                unsigned int dregno = DF_REF_REGNO (def);
                if (HARD_REGISTER_NUM_P (dregno))
                  add_to_hard_reg_set (&crtl->asm_clobbers,
                                       GET_MODE (DF_REF_REAL_REG (def)),
                                       dregno);
              }
        }
    }
}
2258 /* Set up ELIMINABLE_REGSET, IRA_NO_ALLOC_REGS, and
2261 ira_setup_eliminable_regset (void)
2263 #ifdef ELIMINABLE_REGS
2265 static const struct {const int from
, to
; } eliminables
[] = ELIMINABLE_REGS
;
2267 /* FIXME: If EXIT_IGNORE_STACK is set, we will not save and restore
2268 sp for alloca. So we can't eliminate the frame pointer in that
2269 case. At some point, we should improve this by emitting the
2270 sp-adjusting insns for this case. */
2271 frame_pointer_needed
2272 = (! flag_omit_frame_pointer
2273 || (cfun
->calls_alloca
&& EXIT_IGNORE_STACK
)
2274 /* We need the frame pointer to catch stack overflow exceptions
2275 if the stack pointer is moving. */
2276 || (flag_stack_check
&& STACK_CHECK_MOVING_SP
)
2277 || crtl
->accesses_prior_frames
2278 || (SUPPORTS_STACK_ALIGNMENT
&& crtl
->stack_realign_needed
)
2279 /* We need a frame pointer for all Cilk Plus functions that use
2281 || (flag_cilkplus
&& cfun
->is_cilk_function
)
2282 || targetm
.frame_pointer_required ());
2284 /* The chance that FRAME_POINTER_NEEDED is changed from inspecting
2285 RTL is very small. So if we use frame pointer for RA and RTL
2286 actually prevents this, we will spill pseudos assigned to the
2287 frame pointer in LRA. */
2289 if (frame_pointer_needed
)
2290 df_set_regs_ever_live (HARD_FRAME_POINTER_REGNUM
, true);
2292 COPY_HARD_REG_SET (ira_no_alloc_regs
, no_unit_alloc_regs
);
2293 CLEAR_HARD_REG_SET (eliminable_regset
);
2295 compute_regs_asm_clobbered ();
2297 /* Build the regset of all eliminable registers and show we can't
2298 use those that we already know won't be eliminated. */
2299 #ifdef ELIMINABLE_REGS
2300 for (i
= 0; i
< (int) ARRAY_SIZE (eliminables
); i
++)
2303 = (! targetm
.can_eliminate (eliminables
[i
].from
, eliminables
[i
].to
)
2304 || (eliminables
[i
].to
== STACK_POINTER_REGNUM
&& frame_pointer_needed
));
2306 if (!TEST_HARD_REG_BIT (crtl
->asm_clobbers
, eliminables
[i
].from
))
2308 SET_HARD_REG_BIT (eliminable_regset
, eliminables
[i
].from
);
2311 SET_HARD_REG_BIT (ira_no_alloc_regs
, eliminables
[i
].from
);
2313 else if (cannot_elim
)
2314 error ("%s cannot be used in asm here",
2315 reg_names
[eliminables
[i
].from
]);
2317 df_set_regs_ever_live (eliminables
[i
].from
, true);
2319 #if !HARD_FRAME_POINTER_IS_FRAME_POINTER
2320 if (!TEST_HARD_REG_BIT (crtl
->asm_clobbers
, HARD_FRAME_POINTER_REGNUM
))
2322 SET_HARD_REG_BIT (eliminable_regset
, HARD_FRAME_POINTER_REGNUM
);
2323 if (frame_pointer_needed
)
2324 SET_HARD_REG_BIT (ira_no_alloc_regs
, HARD_FRAME_POINTER_REGNUM
);
2326 else if (frame_pointer_needed
)
2327 error ("%s cannot be used in asm here",
2328 reg_names
[HARD_FRAME_POINTER_REGNUM
]);
2330 df_set_regs_ever_live (HARD_FRAME_POINTER_REGNUM
, true);
2334 if (!TEST_HARD_REG_BIT (crtl
->asm_clobbers
, HARD_FRAME_POINTER_REGNUM
))
2336 SET_HARD_REG_BIT (eliminable_regset
, FRAME_POINTER_REGNUM
);
2337 if (frame_pointer_needed
)
2338 SET_HARD_REG_BIT (ira_no_alloc_regs
, FRAME_POINTER_REGNUM
);
2340 else if (frame_pointer_needed
)
2341 error ("%s cannot be used in asm here", reg_names
[FRAME_POINTER_REGNUM
]);
2343 df_set_regs_ever_live (FRAME_POINTER_REGNUM
, true);
/* Vector of substitutions of register numbers,
   used to map pseudo regs into hardware regs.
   This is set up as a result of register allocation.
   Element N is the hard reg assigned to pseudo reg N,
   or is -1 if no hard reg was assigned.
   If N is a hard reg number, element N is N.  */
short *reg_renumber;

/* Set up REG_RENUMBER and CALLER_SAVE_NEEDED (used by reload) from
   the allocation found by IRA.  */
static void
setup_reg_renumber (void)
{
  int regno, hard_regno;
  ira_allocno_t a;
  ira_allocno_iterator ai;

  caller_save_needed = 0;
  FOR_EACH_ALLOCNO (a, ai)
    {
      if (ira_use_lra_p && ALLOCNO_CAP_MEMBER (a) != NULL)
        continue;
      /* There are no caps at this point.  */
      ira_assert (ALLOCNO_CAP_MEMBER (a) == NULL);
      if (! ALLOCNO_ASSIGNED_P (a))
        /* It can happen if A is not referenced but partially anticipated
           somewhere in a region.  */
        ALLOCNO_ASSIGNED_P (a) = true;
      ira_free_allocno_updated_costs (a);
      hard_regno = ALLOCNO_HARD_REGNO (a);
      regno = ALLOCNO_REGNO (a);
      reg_renumber[regno] = (hard_regno < 0 ? -1 : hard_regno);
      if (hard_regno >= 0)
        {
          int i, nwords;
          enum reg_class pclass;
          ira_object_t obj;

          pclass = ira_pressure_class_translate[REGNO_REG_CLASS (hard_regno)];
          nwords = ALLOCNO_NUM_OBJECTS (a);
          for (i = 0; i < nwords; i++)
            {
              obj = ALLOCNO_OBJECT (a, i);
              IOR_COMPL_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj),
                                      reg_class_contents[pclass]);
            }
          if (ALLOCNO_CALLS_CROSSED_NUM (a) != 0
              && ira_hard_reg_set_intersection_p (hard_regno, ALLOCNO_MODE (a),
                                                  call_used_reg_set))
            {
              ira_assert (!optimize || flag_caller_saves
                          || (ALLOCNO_CALLS_CROSSED_NUM (a)
                              == ALLOCNO_CHEAP_CALLS_CROSSED_NUM (a))
                          || regno >= ira_reg_equiv_len
                          || ira_equiv_no_lvalue_p (regno));
              caller_save_needed = 1;
            }
        }
    }
}
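/* Setting caller_save_needed here is what later makes the caller-save
   machinery insert save/restore code around calls for pseudos that were
   nevertheless given call-clobbered hard registers because that was
   cheaper than spilling them to memory.  */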
/* Set up allocno assignment flags for further allocation
   improvements.  */
static void
setup_allocno_assignment_flags (void)
{
  int hard_regno;
  ira_allocno_t a;
  ira_allocno_iterator ai;

  FOR_EACH_ALLOCNO (a, ai)
    {
      if (! ALLOCNO_ASSIGNED_P (a))
        /* It can happen if A is not referenced but partially anticipated
           somewhere in a region.  */
        ira_free_allocno_updated_costs (a);
      hard_regno = ALLOCNO_HARD_REGNO (a);
      /* Don't assign hard registers to allocnos which are destination
         of removed store at the end of loop.  It makes no sense to keep
         the same value in different hard registers.  It is also
         impossible to assign hard registers correctly to such
         allocnos because the cost info and info about intersected
         calls are incorrect for them.  */
      ALLOCNO_ASSIGNED_P (a) = (hard_regno >= 0
                                || ALLOCNO_EMIT_DATA (a)->mem_optimized_dest_p
                                || (ALLOCNO_MEMORY_COST (a)
                                    - ALLOCNO_CLASS_COST (a)) < 0);
      ira_assert (hard_regno < 0
                  || ira_hard_reg_in_set_p (hard_regno, ALLOCNO_MODE (a),
                                            reg_class_contents
                                            [ALLOCNO_CLASS (a)]));
    }
}
/* Evaluate overall allocation cost and the costs for using hard
   registers and memory for allocnos.  */
static void
calculate_allocation_cost (void)
{
  int hard_regno, cost;
  ira_allocno_t a;
  ira_allocno_iterator ai;

  ira_overall_cost = ira_reg_cost = ira_mem_cost = 0;
  FOR_EACH_ALLOCNO (a, ai)
    {
      hard_regno = ALLOCNO_HARD_REGNO (a);
      ira_assert (hard_regno < 0
                  || (ira_hard_reg_in_set_p
                      (hard_regno, ALLOCNO_MODE (a),
                       reg_class_contents[ALLOCNO_CLASS (a)])));
      if (hard_regno < 0)
        {
          cost = ALLOCNO_MEMORY_COST (a);
          ira_mem_cost += cost;
        }
      else if (ALLOCNO_HARD_REG_COSTS (a) != NULL)
        {
          cost = (ALLOCNO_HARD_REG_COSTS (a)
                  [ira_class_hard_reg_index
                   [ALLOCNO_CLASS (a)][hard_regno]]);
          ira_reg_cost += cost;
        }
      else
        {
          cost = ALLOCNO_CLASS_COST (a);
          ira_reg_cost += cost;
        }
      ira_overall_cost += cost;
    }

  if (internal_flag_ira_verbose > 0 && ira_dump_file != NULL)
    {
      fprintf (ira_dump_file,
               "+++Costs: overall %d, reg %d, mem %d, ld %d, st %d, move %d\n",
               ira_overall_cost, ira_reg_cost, ira_mem_cost,
               ira_load_cost, ira_store_cost, ira_shuffle_cost);
      fprintf (ira_dump_file, "+++ move loops %d, new jumps %d\n",
               ira_move_loops_num, ira_additional_jumps_num);
    }
}
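/* The two dump lines above are emitted for -fira-verbose levels above
   zero and break the overall cost down into its register, memory, load,
   store and shuffle components, which makes it easy to compare the
   effect of different allocation decisions.  */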
#ifdef ENABLE_IRA_CHECKING
/* Check the correctness of the allocation.  We need this because of
   the complicated code that transforms the internal representation of
   several regions into a one-region representation.  */
static void
check_allocation (void)
{
  ira_allocno_t a;
  int hard_regno, nregs, conflict_nregs;
  ira_allocno_iterator ai;

  FOR_EACH_ALLOCNO (a, ai)
    {
      int n = ALLOCNO_NUM_OBJECTS (a);
      int i;

      if (ALLOCNO_CAP_MEMBER (a) != NULL
          || (hard_regno = ALLOCNO_HARD_REGNO (a)) < 0)
        continue;
      nregs = hard_regno_nregs[hard_regno][ALLOCNO_MODE (a)];
      if (nregs == 1)
        /* We allocated a single hard register.  */
        n = 1;
      else if (n > 1)
        /* We allocated multiple hard registers, and we will test
           conflicts in a granularity of single hard regs.  */
        nregs = 1;

      for (i = 0; i < n; i++)
        {
          ira_object_t obj = ALLOCNO_OBJECT (a, i);
          ira_object_t conflict_obj;
          ira_object_conflict_iterator oci;
          int this_regno = hard_regno;
          if (n > 1)
            {
              if (REG_WORDS_BIG_ENDIAN)
                this_regno += n - i - 1;
              else
                this_regno += i;
            }
          FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
            {
              ira_allocno_t conflict_a = OBJECT_ALLOCNO (conflict_obj);
              int conflict_hard_regno = ALLOCNO_HARD_REGNO (conflict_a);
              if (conflict_hard_regno < 0)
                continue;

              conflict_nregs
                = (hard_regno_nregs
                   [conflict_hard_regno][ALLOCNO_MODE (conflict_a)]);

              if (ALLOCNO_NUM_OBJECTS (conflict_a) > 1
                  && conflict_nregs == ALLOCNO_NUM_OBJECTS (conflict_a))
                {
                  if (REG_WORDS_BIG_ENDIAN)
                    conflict_hard_regno
                      += (ALLOCNO_NUM_OBJECTS (conflict_a)
                          - OBJECT_SUBWORD (conflict_obj) - 1);
                  else
                    conflict_hard_regno += OBJECT_SUBWORD (conflict_obj);
                  conflict_nregs = 1;
                }

              if ((conflict_hard_regno <= this_regno
                   && this_regno < conflict_hard_regno + conflict_nregs)
                  || (this_regno <= conflict_hard_regno
                      && conflict_hard_regno < this_regno + nregs))
                {
                  fprintf (stderr, "bad allocation for %d and %d\n",
                           ALLOCNO_REGNO (a), ALLOCNO_REGNO (conflict_a));
                  gcc_unreachable ();
                }
            }
        }
    }
}
#endif
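/* The overlap test above treats the allocation as wrong whenever the
   hard register ranges [this_regno, this_regno + nregs) and
   [conflict_hard_regno, conflict_hard_regno + conflict_nregs) of two
   conflicting objects intersect; with checking enabled this turns a
   silent wrong-code allocation into an immediate internal error.  */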
/* Allocate REG_EQUIV_INIT.  Set it up from IRA_REG_EQUIV, which should
   already be calculated.  */
static void
setup_reg_equiv_init (void)
{
  int i;
  int max_regno = max_reg_num ();

  for (i = 0; i < max_regno; i++)
    reg_equiv_init (i) = ira_reg_equiv[i].init_insns;
}
/* Update equiv regno from movement of FROM_REGNO to TO_REGNO.  INSNS
   are insns which were generated for such movement.  It is assumed
   that FROM_REGNO and TO_REGNO always have the same value at the
   point of any move containing such registers.  This function is used
   to update equiv info for register shuffles on the region borders
   and for caller save/restore insns.  */
void
ira_update_equiv_info_by_shuffle_insn (int to_regno, int from_regno,
                                       rtx_insn *insns)
{
  rtx_insn *insn;
  rtx x, note;

  if (! ira_reg_equiv[from_regno].defined_p
      && (! ira_reg_equiv[to_regno].defined_p
          || ((x = ira_reg_equiv[to_regno].memory) != NULL_RTX
              && ! MEM_READONLY_P (x))))
    return;
  insn = insns;
  if (NEXT_INSN (insn) != NULL_RTX)
    {
      if (! ira_reg_equiv[to_regno].defined_p)
        {
          ira_assert (ira_reg_equiv[to_regno].init_insns == NULL_RTX);
          return;
        }
      ira_reg_equiv[to_regno].defined_p = false;
      ira_reg_equiv[to_regno].memory
        = ira_reg_equiv[to_regno].constant
        = ira_reg_equiv[to_regno].invariant
        = ira_reg_equiv[to_regno].init_insns = NULL;
      if (internal_flag_ira_verbose > 3 && ira_dump_file != NULL)
        fprintf (ira_dump_file,
                 "      Invalidating equiv info for reg %d\n", to_regno);
      return;
    }
  /* It is possible that FROM_REGNO still has no equivalence because
     in shuffles to_regno<-from_regno and from_regno<-to_regno the 2nd
     insn was not processed yet.  */
  if (ira_reg_equiv[from_regno].defined_p)
    {
      ira_reg_equiv[to_regno].defined_p = true;
      if ((x = ira_reg_equiv[from_regno].memory) != NULL_RTX)
        {
          ira_assert (ira_reg_equiv[from_regno].invariant == NULL_RTX
                      && ira_reg_equiv[from_regno].constant == NULL_RTX);
          ira_assert (ira_reg_equiv[to_regno].memory == NULL_RTX
                      || rtx_equal_p (ira_reg_equiv[to_regno].memory, x));
          ira_reg_equiv[to_regno].memory = x;
          if (! MEM_READONLY_P (x))
            /* We don't add the insn to insn init list because memory
               equivalence is just to say what memory is better to use
               when the pseudo is spilled.  */
            return;
        }
      else if ((x = ira_reg_equiv[from_regno].constant) != NULL_RTX)
        {
          ira_assert (ira_reg_equiv[from_regno].invariant == NULL_RTX);
          ira_assert (ira_reg_equiv[to_regno].constant == NULL_RTX
                      || rtx_equal_p (ira_reg_equiv[to_regno].constant, x));
          ira_reg_equiv[to_regno].constant = x;
        }
      else
        {
          x = ira_reg_equiv[from_regno].invariant;
          ira_assert (x != NULL_RTX);
          ira_assert (ira_reg_equiv[to_regno].invariant == NULL_RTX
                      || rtx_equal_p (ira_reg_equiv[to_regno].invariant, x));
          ira_reg_equiv[to_regno].invariant = x;
        }
      if (find_reg_note (insn, REG_EQUIV, x) == NULL_RTX)
        {
          note = set_unique_reg_note (insn, REG_EQUIV, x);
          gcc_assert (note != NULL_RTX);
          if (internal_flag_ira_verbose > 3 && ira_dump_file != NULL)
            {
              fprintf (ira_dump_file,
                       "      Adding equiv note to insn %u for reg %d ",
                       INSN_UID (insn), to_regno);
              dump_value_slim (ira_dump_file, x, 1);
              fprintf (ira_dump_file, "\n");
            }
        }
    }
  ira_reg_equiv[to_regno].init_insns
    = gen_rtx_INSN_LIST (VOIDmode, insn,
                         ira_reg_equiv[to_regno].init_insns);
  if (internal_flag_ira_verbose > 3 && ira_dump_file != NULL)
    fprintf (ira_dump_file,
             "      Adding equiv init move insn %u to reg %d\n",
             INSN_UID (insn), to_regno);
}
/* Fix values of array REG_EQUIV_INIT after live range splitting done
   by IRA.  */
static void
fix_reg_equiv_init (void)
{
  int max_regno = max_reg_num ();
  int i, new_regno, max;
  rtx x, prev, next, insn, set;

  if (max_regno_before_ira < max_regno)
    {
      max = vec_safe_length (reg_equivs);
      grow_reg_equivs ();
      for (i = FIRST_PSEUDO_REGISTER; i < max; i++)
        for (prev = NULL_RTX, x = reg_equiv_init (i);
             x != NULL_RTX;
             x = next)
          {
            next = XEXP (x, 1);
            insn = XEXP (x, 0);
            set = single_set (as_a <rtx_insn *> (insn));
            ira_assert (set != NULL_RTX
                        && (REG_P (SET_DEST (set)) || REG_P (SET_SRC (set))));
            if (REG_P (SET_DEST (set))
                && ((int) REGNO (SET_DEST (set)) == i
                    || (int) ORIGINAL_REGNO (SET_DEST (set)) == i))
              new_regno = REGNO (SET_DEST (set));
            else if (REG_P (SET_SRC (set))
                     && ((int) REGNO (SET_SRC (set)) == i
                         || (int) ORIGINAL_REGNO (SET_SRC (set)) == i))
              new_regno = REGNO (SET_SRC (set));
            else
              gcc_unreachable ();
            if (new_regno == i)
              prev = x;
            else
              {
                /* Remove the wrong list element.  */
                if (prev == NULL_RTX)
                  reg_equiv_init (i) = next;
                else
                  XEXP (prev, 1) = next;
                XEXP (x, 1) = reg_equiv_init (new_regno);
                reg_equiv_init (new_regno) = x;
              }
          }
    }
}
#ifdef ENABLE_IRA_CHECKING
/* Print redundant memory-memory copies.  */
static void
print_redundant_copies (void)
{
  int hard_regno;
  ira_allocno_t a;
  ira_copy_t cp, next_cp;
  ira_allocno_iterator ai;

  FOR_EACH_ALLOCNO (a, ai)
    {
      if (ALLOCNO_CAP_MEMBER (a) != NULL)
        /* It is a cap.  */
        continue;
      hard_regno = ALLOCNO_HARD_REGNO (a);
      if (hard_regno >= 0)
        continue;
      for (cp = ALLOCNO_COPIES (a); cp != NULL; cp = next_cp)
        if (cp->first == a)
          next_cp = cp->next_first_allocno_copy;
        else
          {
            next_cp = cp->next_second_allocno_copy;
            if (internal_flag_ira_verbose > 4 && ira_dump_file != NULL
                && cp->insn != NULL_RTX
                && ALLOCNO_HARD_REGNO (cp->first) == hard_regno)
              fprintf (ira_dump_file,
                       "        Redundant move from %d(freq %d):%d\n",
                       INSN_UID (cp->insn), cp->freq, hard_regno);
          }
    }
}
#endif
/* Set up preferred and alternative classes for new pseudo-registers
   created by IRA starting with START.  */
static void
setup_preferred_alternate_classes_for_new_pseudos (int start)
{
  int i, old_regno;
  int max_regno = max_reg_num ();

  for (i = start; i < max_regno; i++)
    {
      old_regno = ORIGINAL_REGNO (regno_reg_rtx[i]);
      ira_assert (i != old_regno);
      setup_reg_classes (i, reg_preferred_class (old_regno),
                         reg_alternate_class (old_regno),
                         reg_allocno_class (old_regno));
      if (internal_flag_ira_verbose > 2 && ira_dump_file != NULL)
        fprintf (ira_dump_file,
                 "    New r%d: setting preferred %s, alternative %s\n",
                 i, reg_class_names[reg_preferred_class (old_regno)],
                 reg_class_names[reg_alternate_class (old_regno)]);
    }
}
/* The number of entries allocated in reg_info.  */
static int allocated_reg_info_size;

/* Regional allocation can create new pseudo-registers.  This function
   expands some arrays for pseudo-registers.  */
static void
expand_reg_info (void)
{
  int i;
  int size = max_reg_num ();

  resize_reg_info ();
  for (i = allocated_reg_info_size; i < size; i++)
    setup_reg_classes (i, GENERAL_REGS, ALL_REGS, GENERAL_REGS);
  setup_preferred_alternate_classes_for_new_pseudos (allocated_reg_info_size);
  allocated_reg_info_size = size;
}
/* Return TRUE if there is too high register pressure in the function.
   It is used to decide when stack slot sharing is worth doing.  */
static bool
too_high_register_pressure_p (void)
{
  int i;
  enum reg_class pclass;

  for (i = 0; i < ira_pressure_classes_num; i++)
    {
      pclass = ira_pressure_classes[i];
      if (ira_loop_tree_root->reg_pressure[pclass] > 10000)
        return true;
    }
  return false;
}
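/* The 10000 threshold is a coarse cut-off on the pressure recorded for
   the root (whole-function) region; exceeding it for any pressure class
   makes the caller prefer sharing stack slots to keep the frame
   small.  */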
/* Indicate that hard register number FROM was eliminated and replaced with
   an offset from hard register number TO.  The status of hard registers live
   at the start of a basic block is updated by replacing a use of FROM with
   a use of TO.  */
void
mark_elimination (int from, int to)
{
  basic_block bb;
  bitmap r;

  FOR_EACH_BB_FN (bb, cfun)
    {
      r = DF_LR_IN (bb);
      if (bitmap_bit_p (r, from))
        {
          bitmap_clear_bit (r, from);
          bitmap_set_bit (r, to);
        }
      if (! df_live)
        continue;
      r = DF_LIVE_IN (bb);
      if (bitmap_bit_p (r, from))
        {
          bitmap_clear_bit (r, from);
          bitmap_set_bit (r, to);
        }
    }
}
/* The length of the following array.  */
int ira_reg_equiv_len;

/* Info about equiv. info for each register.  */
struct ira_reg_equiv_s *ira_reg_equiv;

/* Expand ira_reg_equiv if necessary.  */
void
ira_expand_reg_equiv (void)
{
  int old = ira_reg_equiv_len;

  if (ira_reg_equiv_len > max_reg_num ())
    return;
  ira_reg_equiv_len = max_reg_num () * 3 / 2 + 1;
  ira_reg_equiv
    = (struct ira_reg_equiv_s *) xrealloc (ira_reg_equiv,
                                           ira_reg_equiv_len
                                           * sizeof (struct ira_reg_equiv_s));
  gcc_assert (old < ira_reg_equiv_len);
  memset (ira_reg_equiv + old, 0,
          sizeof (struct ira_reg_equiv_s) * (ira_reg_equiv_len - old));
}

static void
init_reg_equiv (void)
{
  ira_reg_equiv_len = 0;
  ira_reg_equiv = NULL;
  ira_expand_reg_equiv ();
}

static void
finish_reg_equiv (void)
{
  free (ira_reg_equiv);
}
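/* The table grows geometrically (max_reg_num () * 3 / 2 + 1 entries) so
   that the repeated creation of new pseudos during allocation does not
   force a reallocation for every single new register.  */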
struct equivalence
{
  /* Set when a REG_EQUIV note is found or created.  Use to
     keep track of what memory accesses might be created later,
     e.g. by reload.  */
  rtx replacement;

  rtx *src_p;

  /* The list of each instruction which initializes this register.

     NULL indicates we know nothing about this register's equivalence
     properties.

     An INSN_LIST with a NULL insn indicates this pseudo is already
     known to not have a valid equivalence.  */
  rtx_insn_list *init_insns;

  /* Loop depth is used to recognize equivalences which appear
     to be present within the same loop (or in an inner loop).  */
  short loop_depth;
  /* Nonzero if this had a preexisting REG_EQUIV note.  */
  unsigned char is_arg_equivalence : 1;
  /* Set when an attempt should be made to replace a register
     with the associated src_p entry.  */
  unsigned char replace : 1;
  /* Set if this register has no known equivalence.  */
  unsigned char no_equiv : 1;
};

/* reg_equiv[N] (where N is a pseudo reg number) is the equivalence
   structure for that register.  */
static struct equivalence *reg_equiv;

/* Used for communication between the following two functions: contains
   a MEM that we wish to ensure remains unchanged.  */
static rtx equiv_mem;

/* Set nonzero if EQUIV_MEM is modified.  */
static int equiv_mem_modified;
/* If EQUIV_MEM is modified by modifying DEST, indicate that it is modified.
   Called via note_stores.  */
static void
validate_equiv_mem_from_store (rtx dest, const_rtx set ATTRIBUTE_UNUSED,
                               void *data ATTRIBUTE_UNUSED)
{
  if ((REG_P (dest)
       && reg_overlap_mentioned_p (dest, equiv_mem))
      || (MEM_P (dest)
          && anti_dependence (equiv_mem, dest)))
    equiv_mem_modified = 1;
}
2944 /* Verify that no store between START and the death of REG invalidates
2945 MEMREF. MEMREF is invalidated by modifying a register used in MEMREF,
2946 by storing into an overlapping memory location, or with a non-const
2949 Return 1 if MEMREF remains valid. */
2951 validate_equiv_mem (rtx_insn
*start
, rtx reg
, rtx memref
)
2957 equiv_mem_modified
= 0;
2959 /* If the memory reference has side effects or is volatile, it isn't a
2960 valid equivalence. */
2961 if (side_effects_p (memref
))
2964 for (insn
= start
; insn
&& ! equiv_mem_modified
; insn
= NEXT_INSN (insn
))
2966 if (! INSN_P (insn
))
2969 if (find_reg_note (insn
, REG_DEAD
, reg
))
2972 /* This used to ignore readonly memory and const/pure calls. The problem
2973 is the equivalent form may reference a pseudo which gets assigned a
2974 call clobbered hard reg. When we later replace REG with its
2975 equivalent form, the value in the call-clobbered reg has been
2976 changed and all hell breaks loose. */
2980 note_stores (PATTERN (insn
), validate_equiv_mem_from_store
, NULL
);
2982 /* If a register mentioned in MEMREF is modified via an
2983 auto-increment, we lose the equivalence. Do the same if one
2984 dies; although we could extend the life, it doesn't seem worth
2987 for (note
= REG_NOTES (insn
); note
; note
= XEXP (note
, 1))
2988 if ((REG_NOTE_KIND (note
) == REG_INC
2989 || REG_NOTE_KIND (note
) == REG_DEAD
)
2990 && REG_P (XEXP (note
, 0))
2991 && reg_overlap_mentioned_p (XEXP (note
, 0), memref
))
2998 /* Returns zero if X is known to be invariant. */
3000 equiv_init_varies_p (rtx x
)
3002 RTX_CODE code
= GET_CODE (x
);
3009 return !MEM_READONLY_P (x
) || equiv_init_varies_p (XEXP (x
, 0));
3018 return reg_equiv
[REGNO (x
)].replace
== 0 && rtx_varies_p (x
, 0);
3021 if (MEM_VOLATILE_P (x
))
3030 fmt
= GET_RTX_FORMAT (code
);
3031 for (i
= GET_RTX_LENGTH (code
) - 1; i
>= 0; i
--)
3034 if (equiv_init_varies_p (XEXP (x
, i
)))
3037 else if (fmt
[i
] == 'E')
3040 for (j
= 0; j
< XVECLEN (x
, i
); j
++)
3041 if (equiv_init_varies_p (XVECEXP (x
, i
, j
)))
3048 /* Returns nonzero if X (used to initialize register REGNO) is movable.
3049 X is only movable if the registers it uses have equivalent initializations
3050 which appear to be within the same loop (or in an inner loop) and movable
3051 or if they are not candidates for local_alloc and don't vary. */
3053 equiv_init_movable_p (rtx x
, int regno
)
3057 enum rtx_code code
= GET_CODE (x
);
3062 return equiv_init_movable_p (SET_SRC (x
), regno
);
3077 return ((reg_equiv
[REGNO (x
)].loop_depth
>= reg_equiv
[regno
].loop_depth
3078 && reg_equiv
[REGNO (x
)].replace
)
3079 || (REG_BASIC_BLOCK (REGNO (x
)) < NUM_FIXED_BLOCKS
3080 && ! rtx_varies_p (x
, 0)));
3082 case UNSPEC_VOLATILE
:
3086 if (MEM_VOLATILE_P (x
))
3095 fmt
= GET_RTX_FORMAT (code
);
3096 for (i
= GET_RTX_LENGTH (code
) - 1; i
>= 0; i
--)
3100 if (! equiv_init_movable_p (XEXP (x
, i
), regno
))
3104 for (j
= XVECLEN (x
, i
) - 1; j
>= 0; j
--)
3105 if (! equiv_init_movable_p (XVECEXP (x
, i
, j
), regno
))
3113 /* TRUE if X uses any registers for which reg_equiv[REGNO].replace is
3116 contains_replace_regs (rtx x
)
3120 enum rtx_code code
= GET_CODE (x
);
3134 return reg_equiv
[REGNO (x
)].replace
;
3140 fmt
= GET_RTX_FORMAT (code
);
3141 for (i
= GET_RTX_LENGTH (code
) - 1; i
>= 0; i
--)
3145 if (contains_replace_regs (XEXP (x
, i
)))
3149 for (j
= XVECLEN (x
, i
) - 1; j
>= 0; j
--)
3150 if (contains_replace_regs (XVECEXP (x
, i
, j
)))
3158 /* TRUE if X references a memory location that would be affected by a store
3161 memref_referenced_p (rtx memref
, rtx x
)
3165 enum rtx_code code
= GET_CODE (x
);
3180 return (reg_equiv
[REGNO (x
)].replacement
3181 && memref_referenced_p (memref
,
3182 reg_equiv
[REGNO (x
)].replacement
));
3185 if (true_dependence (memref
, VOIDmode
, x
))
3190 /* If we are setting a MEM, it doesn't count (its address does), but any
3191 other SET_DEST that has a MEM in it is referencing the MEM. */
3192 if (MEM_P (SET_DEST (x
)))
3194 if (memref_referenced_p (memref
, XEXP (SET_DEST (x
), 0)))
3197 else if (memref_referenced_p (memref
, SET_DEST (x
)))
3200 return memref_referenced_p (memref
, SET_SRC (x
));
3206 fmt
= GET_RTX_FORMAT (code
);
3207 for (i
= GET_RTX_LENGTH (code
) - 1; i
>= 0; i
--)
3211 if (memref_referenced_p (memref
, XEXP (x
, i
)))
3215 for (j
= XVECLEN (x
, i
) - 1; j
>= 0; j
--)
3216 if (memref_referenced_p (memref
, XVECEXP (x
, i
, j
)))
/* TRUE if some insn in the range (START, END] references a memory location
   that would be affected by a store to MEMREF.  */
static int
memref_used_between_p (rtx memref, rtx_insn *start, rtx_insn *end)
{
  rtx_insn *insn;

  for (insn = NEXT_INSN (start); insn != NEXT_INSN (end);
       insn = NEXT_INSN (insn))
    {
      if (!NONDEBUG_INSN_P (insn))
        continue;

      if (memref_referenced_p (memref, PATTERN (insn)))
        return 1;

      /* Nonconst functions may access memory.  */
      if (CALL_P (insn) && (! RTL_CONST_CALL_P (insn)))
        return 1;
    }

  return 0;
}
/* Mark REG as having no known equivalence.
   Some instructions might have been processed before and furnished
   with REG_EQUIV notes for this register; these notes will have to be
   removed.
   STORE is the piece of RTL that does the non-constant / conflicting
   assignment - a SET, CLOBBER or REG_INC note.  It is currently not used,
   but needs to be there because this function is called from note_stores.  */
static void
no_equiv (rtx reg, const_rtx store ATTRIBUTE_UNUSED,
          void *data ATTRIBUTE_UNUSED)
{
  int regno;
  rtx_insn_list *list;

  if (!REG_P (reg))
    return;
  regno = REGNO (reg);
  reg_equiv[regno].no_equiv = 1;
  list = reg_equiv[regno].init_insns;
  if (list && list->insn () == NULL)
    return;
  reg_equiv[regno].init_insns = gen_rtx_INSN_LIST (VOIDmode, NULL_RTX, NULL);
  reg_equiv[regno].replacement = NULL_RTX;
  /* This doesn't matter for equivalences made for argument registers, we
     should keep their initialization insns.  */
  if (reg_equiv[regno].is_arg_equivalence)
    return;
  ira_reg_equiv[regno].defined_p = false;
  ira_reg_equiv[regno].init_insns = NULL;
  for (; list; list = list->next ())
    {
      rtx_insn *insn = list->insn ();
      remove_note (insn, find_reg_note (insn, REG_EQUIV, NULL_RTX));
    }
}
/* Check whether the SUBREG is a paradoxical subreg and set the result
   in PDX_SUBREGS.  */
static void
set_paradoxical_subreg (rtx_insn *insn, bool *pdx_subregs)
{
  subrtx_iterator::array_type array;
  FOR_EACH_SUBRTX (iter, array, PATTERN (insn), NONCONST)
    {
      const_rtx subreg = *iter;
      if (GET_CODE (subreg) == SUBREG)
        {
          const_rtx reg = SUBREG_REG (subreg);
          if (REG_P (reg) && paradoxical_subreg_p (subreg))
            pdx_subregs[REGNO (reg)] = true;
        }
    }
}
/* In DEBUG_INSN location adjust REGs from CLEARED_REGS bitmap to the
   equivalent replacement.  */
static rtx
adjust_cleared_regs (rtx loc, const_rtx old_rtx ATTRIBUTE_UNUSED, void *data)
{
  if (REG_P (loc))
    {
      bitmap cleared_regs = (bitmap) data;
      if (bitmap_bit_p (cleared_regs, REGNO (loc)))
        return simplify_replace_fn_rtx (copy_rtx (*reg_equiv[REGNO (loc)].src_p),
                                        NULL_RTX, adjust_cleared_regs, data);
    }
  return NULL_RTX;
}
/* Nonzero if we recorded an equivalence for a LABEL_REF.  */
static int recorded_label_ref;
3322 /* Find registers that are equivalent to a single value throughout the
3323 compilation (either because they can be referenced in memory or are
3324 set once from a single constant). Lower their priority for a
3327 If such a register is only referenced once, try substituting its
3328 value into the using insn. If it succeeds, we can eliminate the
3329 register completely.
3331 Initialize init_insns in ira_reg_equiv array.
3333 Return non-zero if jump label rebuilding should be done. */
3335 update_equiv_regs (void)
3340 bitmap cleared_regs
;
3343 /* We need to keep track of whether or not we recorded a LABEL_REF so
3344 that we know if the jump optimizer needs to be rerun. */
3345 recorded_label_ref
= 0;
3347 /* Use pdx_subregs to show whether a reg is used in a paradoxical
3349 pdx_subregs
= XCNEWVEC (bool, max_regno
);
3351 reg_equiv
= XCNEWVEC (struct equivalence
, max_regno
);
3354 init_alias_analysis ();
3356 /* Scan insns and set pdx_subregs[regno] if the reg is used in a
3357 paradoxical subreg. Don't set such reg equivalent to a mem,
3358 because lra will not substitute such equiv memory in order to
3359 prevent access beyond allocated memory for paradoxical memory subreg. */
3360 FOR_EACH_BB_FN (bb
, cfun
)
3361 FOR_BB_INSNS (bb
, insn
)
3362 if (NONDEBUG_INSN_P (insn
))
3363 set_paradoxical_subreg (insn
, pdx_subregs
);
3365 /* Scan the insns and find which registers have equivalences. Do this
3366 in a separate scan of the insns because (due to -fcse-follow-jumps)
3367 a register can be set below its use. */
3368 FOR_EACH_BB_FN (bb
, cfun
)
3370 loop_depth
= bb_loop_depth (bb
);
3372 for (insn
= BB_HEAD (bb
);
3373 insn
!= NEXT_INSN (BB_END (bb
));
3374 insn
= NEXT_INSN (insn
))
3381 if (! INSN_P (insn
))
3384 for (note
= REG_NOTES (insn
); note
; note
= XEXP (note
, 1))
3385 if (REG_NOTE_KIND (note
) == REG_INC
)
3386 no_equiv (XEXP (note
, 0), note
, NULL
);
3388 set
= single_set (insn
);
3390 /* If this insn contains more (or less) than a single SET,
3391 only mark all destinations as having no known equivalence. */
3392 if (set
== NULL_RTX
)
3394 note_stores (PATTERN (insn
), no_equiv
, NULL
);
3397 else if (GET_CODE (PATTERN (insn
)) == PARALLEL
)
3401 for (i
= XVECLEN (PATTERN (insn
), 0) - 1; i
>= 0; i
--)
3403 rtx part
= XVECEXP (PATTERN (insn
), 0, i
);
3405 note_stores (part
, no_equiv
, NULL
);
3409 dest
= SET_DEST (set
);
3410 src
= SET_SRC (set
);
3412 /* See if this is setting up the equivalence between an argument
3413 register and its stack slot. */
3414 note
= find_reg_note (insn
, REG_EQUIV
, NULL_RTX
);
3417 gcc_assert (REG_P (dest
));
3418 regno
= REGNO (dest
);
3420 /* Note that we don't want to clear init_insns in
3421 ira_reg_equiv even if there are multiple sets of this
3423 reg_equiv
[regno
].is_arg_equivalence
= 1;
3425 /* The insn result can have equivalence memory although
3426 the equivalence is not set up by the insn. We add
3427 this insn to init insns as it is a flag for now that
3428 regno has an equivalence. We will remove the insn
3429 from init insn list later. */
3430 if (rtx_equal_p (src
, XEXP (note
, 0)) || MEM_P (XEXP (note
, 0)))
3431 ira_reg_equiv
[regno
].init_insns
3432 = gen_rtx_INSN_LIST (VOIDmode
, insn
,
3433 ira_reg_equiv
[regno
].init_insns
);
3435 /* Continue normally in case this is a candidate for
3442 /* We only handle the case of a pseudo register being set
3443 once, or always to the same value. */
3444 /* ??? The mn10200 port breaks if we add equivalences for
3445 values that need an ADDRESS_REGS register and set them equivalent
3446 to a MEM of a pseudo. The actual problem is in the over-conservative
3447 handling of INPADDR_ADDRESS / INPUT_ADDRESS / INPUT triples in
3448 calculate_needs, but we traditionally work around this problem
3449 here by rejecting equivalences when the destination is in a register
3450 that's likely spilled. This is fragile, of course, since the
3451 preferred class of a pseudo depends on all instructions that set
3455 || (regno
= REGNO (dest
)) < FIRST_PSEUDO_REGISTER
3456 || (reg_equiv
[regno
].init_insns
3457 && reg_equiv
[regno
].init_insns
->insn () == NULL
)
3458 || (targetm
.class_likely_spilled_p (reg_preferred_class (regno
))
3459 && MEM_P (src
) && ! reg_equiv
[regno
].is_arg_equivalence
))
3461 /* This might be setting a SUBREG of a pseudo, a pseudo that is
3462 also set somewhere else to a constant. */
3463 note_stores (set
, no_equiv
, NULL
);
3467 /* Don't set reg (if pdx_subregs[regno] == true) equivalent to a mem. */
3468 if (MEM_P (src
) && pdx_subregs
[regno
])
3470 note_stores (set
, no_equiv
, NULL
);
3474 note
= find_reg_note (insn
, REG_EQUAL
, NULL_RTX
);
3476 /* cse sometimes generates function invariants, but doesn't put a
3477 REG_EQUAL note on the insn. Since this note would be redundant,
3478 there's no point creating it earlier than here. */
3479 if (! note
&& ! rtx_varies_p (src
, 0))
3480 note
= set_unique_reg_note (insn
, REG_EQUAL
, copy_rtx (src
));
3482 /* Don't bother considering a REG_EQUAL note containing an EXPR_LIST
3483 since it represents a function call. */
3484 if (note
&& GET_CODE (XEXP (note
, 0)) == EXPR_LIST
)
3487 if (DF_REG_DEF_COUNT (regno
) != 1)
3489 bool equal_p
= true;
3490 rtx_insn_list
*list
;
3492 /* If we have already processed this pseudo and determined it
3493 can not have an equivalence, then honor that decision. */
3494 if (reg_equiv
[regno
].no_equiv
)
3498 || rtx_varies_p (XEXP (note
, 0), 0)
3499 || (reg_equiv
[regno
].replacement
3500 && ! rtx_equal_p (XEXP (note
, 0),
3501 reg_equiv
[regno
].replacement
)))
3503 no_equiv (dest
, set
, NULL
);
3507 list
= reg_equiv
[regno
].init_insns
;
3508 for (; list
; list
= list
->next ())
3513 insn_tmp
= list
->insn ();
3514 note_tmp
= find_reg_note (insn_tmp
, REG_EQUAL
, NULL_RTX
);
3515 gcc_assert (note_tmp
);
3516 if (! rtx_equal_p (XEXP (note
, 0), XEXP (note_tmp
, 0)))
3525 no_equiv (dest
, set
, NULL
);
3530 /* Record this insn as initializing this register. */
3531 reg_equiv
[regno
].init_insns
3532 = gen_rtx_INSN_LIST (VOIDmode
, insn
, reg_equiv
[regno
].init_insns
);
3534 /* If this register is known to be equal to a constant, record that
3535 it is always equivalent to the constant. */
3536 if (DF_REG_DEF_COUNT (regno
) == 1
3537 && note
&& ! rtx_varies_p (XEXP (note
, 0), 0))
3539 rtx note_value
= XEXP (note
, 0);
3540 remove_note (insn
, note
);
3541 set_unique_reg_note (insn
, REG_EQUIV
, note_value
);
3544 /* If this insn introduces a "constant" register, decrease the priority
3545 of that register. Record this insn if the register is only used once
3546 more and the equivalence value is the same as our source.
3548 The latter condition is checked for two reasons: First, it is an
3549 indication that it may be more efficient to actually emit the insn
3550 as written (if no registers are available, reload will substitute
3551 the equivalence). Secondly, it avoids problems with any registers
3552 dying in this insn whose death notes would be missed.
3554 If we don't have a REG_EQUIV note, see if this insn is loading
3555 a register used only in one basic block from a MEM. If so, and the
3556 MEM remains unchanged for the life of the register, add a REG_EQUIV
3558 note
= find_reg_note (insn
, REG_EQUIV
, NULL_RTX
);
3560 if (note
== NULL_RTX
&& REG_BASIC_BLOCK (regno
) >= NUM_FIXED_BLOCKS
3561 && MEM_P (SET_SRC (set
))
3562 && validate_equiv_mem (insn
, dest
, SET_SRC (set
)))
3563 note
= set_unique_reg_note (insn
, REG_EQUIV
, copy_rtx (SET_SRC (set
)));
3567 int regno
= REGNO (dest
);
3568 rtx x
= XEXP (note
, 0);
3570 /* If we haven't done so, record for reload that this is an
3571 equivalencing insn. */
3572 if (!reg_equiv
[regno
].is_arg_equivalence
)
3573 ira_reg_equiv
[regno
].init_insns
3574 = gen_rtx_INSN_LIST (VOIDmode
, insn
,
3575 ira_reg_equiv
[regno
].init_insns
);
3577 /* Record whether or not we created a REG_EQUIV note for a LABEL_REF.
3578 We might end up substituting the LABEL_REF for uses of the
3579 pseudo here or later. That kind of transformation may turn an
3580 indirect jump into a direct jump, in which case we must rerun the
3581 jump optimizer to ensure that the JUMP_LABEL fields are valid. */
3582 if (GET_CODE (x
) == LABEL_REF
3583 || (GET_CODE (x
) == CONST
3584 && GET_CODE (XEXP (x
, 0)) == PLUS
3585 && (GET_CODE (XEXP (XEXP (x
, 0), 0)) == LABEL_REF
)))
3586 recorded_label_ref
= 1;
3588 reg_equiv
[regno
].replacement
= x
;
3589 reg_equiv
[regno
].src_p
= &SET_SRC (set
);
3590 reg_equiv
[regno
].loop_depth
= (short) loop_depth
;
3592 /* Don't mess with things live during setjmp. */
3593 if (REG_LIVE_LENGTH (regno
) >= 0 && optimize
)
3595 /* Note that the statement below does not affect the priority
3597 REG_LIVE_LENGTH (regno
) *= 2;
3599 /* If the register is referenced exactly twice, meaning it is
3600 set once and used once, indicate that the reference may be
3601 replaced by the equivalence we computed above. Do this
3602 even if the register is only used in one block so that
3603 dependencies can be handled where the last register is
3604 used in a different block (i.e. HIGH / LO_SUM sequences)
3605 and to reduce the number of registers alive across
3608 if (REG_N_REFS (regno
) == 2
3609 && (rtx_equal_p (x
, src
)
3610 || ! equiv_init_varies_p (src
))
3611 && NONJUMP_INSN_P (insn
)
3612 && equiv_init_movable_p (PATTERN (insn
), regno
))
3613 reg_equiv
[regno
].replace
= 1;
3622 /* A second pass, to gather additional equivalences with memory. This needs
3623 to be done after we know which registers we are going to replace. */
3625 for (insn
= get_insns (); insn
; insn
= NEXT_INSN (insn
))
3630 if (! INSN_P (insn
))
3633 set
= single_set (insn
);
3637 dest
= SET_DEST (set
);
3638 src
= SET_SRC (set
);
3640 /* If this sets a MEM to the contents of a REG that is only used
3641 in a single basic block, see if the register is always equivalent
3642 to that memory location and if moving the store from INSN to the
3643 insn that set REG is safe. If so, put a REG_EQUIV note on the
3646 Don't add a REG_EQUIV note if the insn already has one. The existing
3647 REG_EQUIV is likely more useful than the one we are adding.
3649 If one of the regs in the address has reg_equiv[REGNO].replace set,
3650 then we can't add this REG_EQUIV note. The reg_equiv[REGNO].replace
3651 optimization may move the set of this register immediately before
3652 insn, which puts it after reg_equiv[REGNO].init_insns, and hence
3653 the mention in the REG_EQUIV note would be to an uninitialized
3656 if (MEM_P (dest
) && REG_P (src
)
3657 && (regno
= REGNO (src
)) >= FIRST_PSEUDO_REGISTER
3658 && REG_BASIC_BLOCK (regno
) >= NUM_FIXED_BLOCKS
3659 && DF_REG_DEF_COUNT (regno
) == 1
3660 && reg_equiv
[regno
].init_insns
!= NULL
3661 && reg_equiv
[regno
].init_insns
->insn () != NULL
3662 && ! find_reg_note (XEXP (reg_equiv
[regno
].init_insns
, 0),
3663 REG_EQUIV
, NULL_RTX
)
3664 && ! contains_replace_regs (XEXP (dest
, 0))
3665 && ! pdx_subregs
[regno
])
3667 rtx_insn
*init_insn
=
3668 as_a
<rtx_insn
*> (XEXP (reg_equiv
[regno
].init_insns
, 0));
3669 if (validate_equiv_mem (init_insn
, src
, dest
)
3670 && ! memref_used_between_p (dest
, init_insn
, insn
)
3671 /* Attaching a REG_EQUIV note will fail if INIT_INSN has
3673 && set_unique_reg_note (init_insn
, REG_EQUIV
, copy_rtx (dest
)))
3675 /* This insn makes the equivalence, not the one initializing
3677 ira_reg_equiv
[regno
].init_insns
3678 = gen_rtx_INSN_LIST (VOIDmode
, insn
, NULL_RTX
);
3679 df_notes_rescan (init_insn
);
3684 cleared_regs
= BITMAP_ALLOC (NULL
);
3685 /* Now scan all regs killed in an insn to see if any of them are
3686 registers only used that once. If so, see if we can replace the
3687 reference with the equivalent form. If we can, delete the
3688 initializing reference and this register will go away. If we
3689 can't replace the reference, and the initializing reference is
3690 within the same loop (or in an inner loop), then move the register
3691 initialization just before the use, so that they are in the same
  FOR_EACH_BB_REVERSE_FN (bb, cfun)
    {
      loop_depth = bb_loop_depth (bb);
      for (insn = BB_END (bb);
	   insn != PREV_INSN (BB_HEAD (bb));
	   insn = PREV_INSN (insn))
	{
	  rtx link;

	  if (! INSN_P (insn))
	    continue;

	  /* Don't substitute into a non-local goto, this confuses CFG.  */
	  if (JUMP_P (insn)
	      && find_reg_note (insn, REG_NON_LOCAL_GOTO, NULL_RTX))
	    continue;

	  for (link = REG_NOTES (insn); link; link = XEXP (link, 1))
	    {
	      if (REG_NOTE_KIND (link) == REG_DEAD
		  /* Make sure this insn still refers to the register.  */
		  && reg_mentioned_p (XEXP (link, 0), PATTERN (insn)))
		{
		  int regno = REGNO (XEXP (link, 0));
		  rtx equiv_insn;

		  if (! reg_equiv[regno].replace
		      || reg_equiv[regno].loop_depth < (short) loop_depth
		      /* There is no point in moving insns if live range
			 shrinkage or register pressure-sensitive
			 scheduling was done, because it will not
			 improve allocation but will very likely worsen
			 the insn schedule.  */
		      || flag_live_range_shrinkage
		      || (flag_sched_pressure && flag_schedule_insns))
		    continue;

		  /* reg_equiv[REGNO].replace gets set only when
		     REG_N_REFS[REGNO] is 2, i.e. the register is set
		     once and used once.  (If it were only set, but
		     not used, flow would have deleted the setting
		     insns.)  Hence there can only be one insn in
		     reg_equiv[REGNO].init_insns.  */
		  gcc_assert (reg_equiv[regno].init_insns
			      && !XEXP (reg_equiv[regno].init_insns, 1));
		  equiv_insn = XEXP (reg_equiv[regno].init_insns, 0);

		  /* We may not move instructions that can throw, since
		     that changes basic block boundaries and we are not
		     prepared to adjust the CFG to match.  */
		  if (can_throw_internal (equiv_insn))
		    continue;

		  if (asm_noperands (PATTERN (equiv_insn)) < 0
		      && validate_replace_rtx (regno_reg_rtx[regno],
					       *(reg_equiv[regno].src_p), insn))
		    {
		      rtx equiv_link, last_link, note;

		      /* Find the last note.  */
		      for (last_link = link; XEXP (last_link, 1);
			   last_link = XEXP (last_link, 1))
			;

		      /* Append the REG_DEAD notes from equiv_insn.  */
		      equiv_link = REG_NOTES (equiv_insn);
		      while (equiv_link)
			{
			  note = equiv_link;
			  equiv_link = XEXP (equiv_link, 1);
			  if (REG_NOTE_KIND (note) == REG_DEAD)
			    {
			      remove_note (equiv_insn, note);
			      XEXP (last_link, 1) = note;
			      XEXP (note, 1) = NULL_RTX;
			      last_link = note;
			    }
			}

		      remove_death (regno, insn);
		      SET_REG_N_REFS (regno, 0);
		      REG_FREQ (regno) = 0;
		      delete_insn (equiv_insn);

		      reg_equiv[regno].init_insns
			= reg_equiv[regno].init_insns->next ();

		      ira_reg_equiv[regno].init_insns = NULL;
		      bitmap_set_bit (cleared_regs, regno);
		    }
		  /* Move the initialization of the register to just before
		     INSN.  Update the flow information.  */
		  else if (prev_nondebug_insn (insn) != equiv_insn)
		    {
		      rtx_insn *new_insn;

		      new_insn = emit_insn_before (PATTERN (equiv_insn), insn);
		      REG_NOTES (new_insn) = REG_NOTES (equiv_insn);
		      REG_NOTES (equiv_insn) = 0;
		      /* Rescan it to process the notes.  */
		      df_insn_rescan (new_insn);

		      /* Make sure this insn is recognized before
			 reload begins, otherwise
			 eliminate_regs_in_insn will die.  */
		      INSN_CODE (new_insn) = INSN_CODE (equiv_insn);

		      delete_insn (equiv_insn);

		      XEXP (reg_equiv[regno].init_insns, 0) = new_insn;

		      REG_BASIC_BLOCK (regno) = bb->index;
		      REG_N_CALLS_CROSSED (regno) = 0;
		      REG_FREQ_CALLS_CROSSED (regno) = 0;
		      REG_N_THROWING_CALLS_CROSSED (regno) = 0;
		      REG_LIVE_LENGTH (regno) = 2;

		      if (insn == BB_HEAD (bb))
			BB_HEAD (bb) = PREV_INSN (insn);

		      ira_reg_equiv[regno].init_insns
			= gen_rtx_INSN_LIST (VOIDmode, new_insn, NULL_RTX);
		      bitmap_set_bit (cleared_regs, regno);
		    }
		}
	    }
	}
    }
  if (!bitmap_empty_p (cleared_regs))
    {
      FOR_EACH_BB_FN (bb, cfun)
	{
	  bitmap_and_compl_into (DF_LR_IN (bb), cleared_regs);
	  bitmap_and_compl_into (DF_LR_OUT (bb), cleared_regs);
	  bitmap_and_compl_into (DF_LIVE_IN (bb), cleared_regs);
	  bitmap_and_compl_into (DF_LIVE_OUT (bb), cleared_regs);
	}

      /* Last pass - adjust debug insns referencing cleared regs.  */
      if (MAY_HAVE_DEBUG_INSNS)
	for (insn = get_insns (); insn; insn = NEXT_INSN (insn))
	  if (DEBUG_INSN_P (insn))
	    {
	      rtx old_loc = INSN_VAR_LOCATION_LOC (insn);
	      INSN_VAR_LOCATION_LOC (insn)
		= simplify_replace_fn_rtx (old_loc, NULL_RTX,
					   adjust_cleared_regs,
					   (void *) cleared_regs);
	      if (old_loc != INSN_VAR_LOCATION_LOC (insn))
		df_insn_rescan (insn);
	    }
    }

  BITMAP_FREE (cleared_regs);

  end_alias_analysis ();

  return recorded_label_ref;
}
/* Set up fields memory, constant, and invariant from init_insns in
   the structures of array ira_reg_equiv.  */
static void
setup_reg_equiv (void)
{
  int i;
  rtx_insn_list *elem, *prev_elem, *next_elem;
  rtx_insn *insn;
  rtx set, x;

  for (i = FIRST_PSEUDO_REGISTER; i < ira_reg_equiv_len; i++)
    for (prev_elem = NULL, elem = ira_reg_equiv[i].init_insns;
	 elem;
	 prev_elem = elem, elem = next_elem)
      {
	next_elem = elem->next ();
	insn = elem->insn ();
	set = single_set (insn);

	/* Init insns can set up equivalence when the reg is a destination or
	   a source (in this case the destination is memory).  */
	if (set != 0 && (REG_P (SET_DEST (set)) || REG_P (SET_SRC (set))))
	  {
	    if ((x = find_reg_note (insn, REG_EQUIV, NULL_RTX)) != NULL)
	      {
		x = XEXP (x, 0);
		if (REG_P (SET_DEST (set))
		    && REGNO (SET_DEST (set)) == (unsigned int) i
		    && ! rtx_equal_p (SET_SRC (set), x) && MEM_P (x))
		  {
		    /* This insn reports the equivalence but does not
		       actually set it up.  Remove it from the list of
		       init insns.  */
		    if (prev_elem == NULL)
		      ira_reg_equiv[i].init_insns = next_elem;
		    else
		      XEXP (prev_elem, 1) = next_elem;
		    continue;
		  }
	      }
	    else if (REG_P (SET_DEST (set))
		     && REGNO (SET_DEST (set)) == (unsigned int) i)
	      x = SET_SRC (set);
	    else
	      {
		gcc_assert (REG_P (SET_SRC (set))
			    && REGNO (SET_SRC (set)) == (unsigned int) i);
		x = SET_DEST (set);
	      }

	    if (! function_invariant_p (x)
		/* A function invariant is often CONSTANT_P but may
		   include a register.  We promise to only pass
		   CONSTANT_P objects to LEGITIMATE_PIC_OPERAND_P.  */
		|| (CONSTANT_P (x) && LEGITIMATE_PIC_OPERAND_P (x)))
	      {
		/* It can happen that a REG_EQUIV note contains a MEM
		   that is not a legitimate memory operand.  As later
		   stages of reload assume that all addresses found in
		   the lra_regno_equiv_* arrays were originally
		   legitimate, we ignore such REG_EQUIV notes.  */
		if (memory_operand (x, VOIDmode))
		  {
		    ira_reg_equiv[i].defined_p = true;
		    ira_reg_equiv[i].memory = x;
		    continue;
		  }
		else if (function_invariant_p (x))
		  {
		    enum machine_mode mode;

		    mode = GET_MODE (SET_DEST (set));
		    if (GET_CODE (x) == PLUS
			|| x == frame_pointer_rtx || x == arg_pointer_rtx)
		      {
			/* This is PLUS of frame pointer and a constant,
			   or fp, or argp.  */
			ira_reg_equiv[i].invariant = x;
			ira_reg_equiv[i].defined_p = true;
			continue;
		      }
		    else if (targetm.legitimate_constant_p (mode, x))
		      {
			ira_reg_equiv[i].constant = x;
			ira_reg_equiv[i].defined_p = true;
			continue;
		      }
		    else
		      {
			ira_reg_equiv[i].memory = force_const_mem (mode, x);
			if (ira_reg_equiv[i].memory == NULL_RTX)
			  {
			    ira_reg_equiv[i].defined_p = false;
			    ira_reg_equiv[i].init_insns = NULL;
			    break;
			  }
		      }
		    ira_reg_equiv[i].defined_p = true;
		    continue;
		  }
	      }
	  }
	ira_reg_equiv[i].defined_p = false;
	ira_reg_equiv[i].init_insns = NULL;
	break;
      }
}
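
/* For example (illustrative only): an equivalence of (symbol_ref "x") is
   recorded in the constant field when the target accepts it as a legitimate
   constant, (plus (reg fp) (const_int 8)) goes to the invariant field, and a
   legitimate (mem ...) operand goes to the memory field.  */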
/* Print chain C to FILE.  */
static void
print_insn_chain (FILE *file, struct insn_chain *c)
{
  fprintf (file, "insn=%d, ", INSN_UID (c->insn));
  bitmap_print (file, &c->live_throughout, "live_throughout: ", ", ");
  bitmap_print (file, &c->dead_or_set, "dead_or_set: ", "\n");
}


/* Print all reload_insn_chains to FILE.  */
static void
print_insn_chains (FILE *file)
{
  struct insn_chain *c;
  for (c = reload_insn_chain; c; c = c->next)
    print_insn_chain (file, c);
}
/* Return true if pseudo REGNO should be added to set live_throughout
   or dead_or_set of the insn chains for reload consideration.  */
static bool
pseudo_for_reload_consideration_p (int regno)
{
  /* Consider spilled pseudos too for IRA because they still have a
     chance to get hard-registers in the reload when IRA is used.  */
  return (reg_renumber[regno] >= 0 || ira_conflicts_p);
}
/* Init LIVE_SUBREGS[ALLOCNUM] and LIVE_SUBREGS_USED[ALLOCNUM] using
   REG to the number of nregs, and INIT_VALUE to get the
   initialization.  ALLOCNUM need not be the regno of REG.  */
static void
init_live_subregs (bool init_value, sbitmap *live_subregs,
		   bitmap live_subregs_used, int allocnum, rtx reg)
{
  unsigned int regno = REGNO (SUBREG_REG (reg));
  int size = GET_MODE_SIZE (GET_MODE (regno_reg_rtx[regno]));

  gcc_assert (size > 0);

  /* Been there, done that.  */
  if (bitmap_bit_p (live_subregs_used, allocnum))
    return;

  /* Create a new one.  */
  if (live_subregs[allocnum] == NULL)
    live_subregs[allocnum] = sbitmap_alloc (size);

  /* If the entire reg was live before blasting into subregs, we need
     to init all of the subregs to ones; otherwise init them to 0.  */
  if (init_value)
    bitmap_ones (live_subregs[allocnum]);
  else
    bitmap_clear (live_subregs[allocnum]);

  bitmap_set_bit (live_subregs_used, allocnum);
}
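
/* A minimal, self-contained sketch (not GCC code) of the idea behind
   live_subregs: liveness of a multiword pseudo is tracked per byte, so a
   write to only part of the register kills only those bytes, and the
   pseudo dies only when no byte remains live.  The 8-byte width and the
   byte ranges below are illustrative assumptions.  */
#if 0
#include <stdbool.h>
#include <stdio.h>

#define PSEUDO_SIZE 8			/* assumed size of the pseudo in bytes */

static bool live_bytes[PSEUDO_SIZE];	/* toy stand-in for an sbitmap */

/* A use of bytes [start, last) makes them live.  */
static void
use_bytes (int start, int last)
{
  for (int i = start; i < last; i++)
    live_bytes[i] = true;
}

/* A def of bytes [start, last) kills them; return whether any byte of the
   pseudo is still live afterwards.  */
static bool
def_bytes (int start, int last)
{
  for (int i = start; i < last; i++)
    live_bytes[i] = false;
  for (int i = 0; i < PSEUDO_SIZE; i++)
    if (live_bytes[i])
      return true;
  return false;
}

int
main (void)
{
  use_bytes (0, PSEUDO_SIZE);		/* the whole register is read later */
  printf ("low half defined: %s\n", def_bytes (0, 4) ? "still live" : "dead");
  printf ("high half defined: %s\n", def_bytes (4, 8) ? "still live" : "dead");
  return 0;
}
#endif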
/* Walk the insns of the current function and build reload_insn_chain,
   and record register life information.  */
static void
build_insn_chain (void)
{
  unsigned int i;
  struct insn_chain **p = &reload_insn_chain;
  basic_block bb;
  struct insn_chain *c = NULL;
  struct insn_chain *next = NULL;
  bitmap live_relevant_regs = BITMAP_ALLOC (NULL);
  bitmap elim_regset = BITMAP_ALLOC (NULL);
  /* live_subregs is a vector used to keep accurate information about
     which hardregs are live in multiword pseudos.  live_subregs and
     live_subregs_used are indexed by pseudo number.  The live_subreg
     entry for a particular pseudo is only used if the corresponding
     element is nonzero in live_subregs_used.  The sbitmap size of
     live_subreg[allocno] is the number of bytes that the pseudo can
     occupy.  */
  sbitmap *live_subregs = XCNEWVEC (sbitmap, max_regno);
  bitmap live_subregs_used = BITMAP_ALLOC (NULL);

  for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
    if (TEST_HARD_REG_BIT (eliminable_regset, i))
      bitmap_set_bit (elim_regset, i);

  FOR_EACH_BB_REVERSE_FN (bb, cfun)
    {
      bitmap_iterator bi;
      rtx_insn *insn;

      CLEAR_REG_SET (live_relevant_regs);
      bitmap_clear (live_subregs_used);

      EXECUTE_IF_SET_IN_BITMAP (df_get_live_out (bb), 0, i, bi)
	{
	  if (i >= FIRST_PSEUDO_REGISTER)
	    break;
	  bitmap_set_bit (live_relevant_regs, i);
	}

      EXECUTE_IF_SET_IN_BITMAP (df_get_live_out (bb),
				FIRST_PSEUDO_REGISTER, i, bi)
	{
	  if (pseudo_for_reload_consideration_p (i))
	    bitmap_set_bit (live_relevant_regs, i);
	}
      FOR_BB_INSNS_REVERSE (bb, insn)
	{
	  if (!NOTE_P (insn) && !BARRIER_P (insn))
	    {
	      struct df_insn_info *insn_info = DF_INSN_INFO_GET (insn);
	      df_ref def, use;

	      c = new_insn_chain ();
	      c->block = bb->index;

	      if (NONDEBUG_INSN_P (insn))
		FOR_EACH_INSN_INFO_DEF (def, insn_info)
		  {
		    unsigned int regno = DF_REF_REGNO (def);

		    /* Ignore may clobbers because these are generated
		       from calls.  However, every other kind of def is
		       added to dead_or_set.  */
		    if (!DF_REF_FLAGS_IS_SET (def, DF_REF_MAY_CLOBBER))
		      {
			if (regno < FIRST_PSEUDO_REGISTER)
			  {
			    if (!fixed_regs[regno])
			      bitmap_set_bit (&c->dead_or_set, regno);
			  }
			else if (pseudo_for_reload_consideration_p (regno))
			  bitmap_set_bit (&c->dead_or_set, regno);
		      }

		    if ((regno < FIRST_PSEUDO_REGISTER
			 || reg_renumber[regno] >= 0
			 || ira_conflicts_p)
			&& (!DF_REF_FLAGS_IS_SET (def, DF_REF_CONDITIONAL)))
		      {
			rtx reg = DF_REF_REG (def);

			/* We can model subregs, but not if they are
			   wrapped in ZERO_EXTRACTS.  */
			if (GET_CODE (reg) == SUBREG
			    && !DF_REF_FLAGS_IS_SET (def, DF_REF_ZERO_EXTRACT))
			  {
			    unsigned int start = SUBREG_BYTE (reg);
			    unsigned int last = start
			      + GET_MODE_SIZE (GET_MODE (reg));

			    init_live_subregs
			      (bitmap_bit_p (live_relevant_regs, regno),
			       live_subregs, live_subregs_used, regno, reg);

			    if (!DF_REF_FLAGS_IS_SET
				(def, DF_REF_STRICT_LOW_PART))
			      {
				/* Expand the range to cover entire words.
				   Bytes added here are "don't care".  */
				start
				  = start / UNITS_PER_WORD * UNITS_PER_WORD;
				last = ((last + UNITS_PER_WORD - 1)
					/ UNITS_PER_WORD * UNITS_PER_WORD);
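				/* For example, assuming UNITS_PER_WORD == 4
				   and a 2-byte write at SUBREG_BYTE 6:
				   start = 6 and last = 8 become
				   start = 6 / 4 * 4 = 4 and
				   last = (8 + 3) / 4 * 4 = 8, so the whole
				   second word is treated as written.  */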
			      }

			    /* Ignore the paradoxical bits.  */
			    if (last > SBITMAP_SIZE (live_subregs[regno]))
			      last = SBITMAP_SIZE (live_subregs[regno]);

			    while (start < last)
			      {
				bitmap_clear_bit (live_subregs[regno], start);
				start++;
			      }

			    if (bitmap_empty_p (live_subregs[regno]))
			      {
				bitmap_clear_bit (live_subregs_used, regno);
				bitmap_clear_bit (live_relevant_regs, regno);
			      }
			    else
			      /* Set live_relevant_regs here because
				 that bit has to be true to get us to
				 look at the live_subregs fields.  */
			      bitmap_set_bit (live_relevant_regs, regno);
			  }
			else
			  {
			    /* DF_REF_PARTIAL is generated for
			       subregs, STRICT_LOW_PART, and
			       ZERO_EXTRACT.  We handle the subreg
			       case above so here we have to keep from
			       modeling the def as a killing def.  */
			    if (!DF_REF_FLAGS_IS_SET (def, DF_REF_PARTIAL))
			      {
				bitmap_clear_bit (live_subregs_used, regno);
				bitmap_clear_bit (live_relevant_regs, regno);
			      }
			  }
		      }
		  }
	      bitmap_and_compl_into (live_relevant_regs, elim_regset);
	      bitmap_copy (&c->live_throughout, live_relevant_regs);

	      if (NONDEBUG_INSN_P (insn))
		FOR_EACH_INSN_INFO_USE (use, insn_info)
		  {
		    unsigned int regno = DF_REF_REGNO (use);
		    rtx reg = DF_REF_REG (use);

		    /* DF_REF_READ_WRITE on a use means that this use
		       is fabricated from a def that is a partial set
		       to a multiword reg.  Here, we only model the
		       subreg case that is not wrapped in ZERO_EXTRACT
		       precisely so we do not need to look at the
		       fabricated use.  */
		    if (DF_REF_FLAGS_IS_SET (use, DF_REF_READ_WRITE)
			&& !DF_REF_FLAGS_IS_SET (use, DF_REF_ZERO_EXTRACT)
			&& DF_REF_FLAGS_IS_SET (use, DF_REF_SUBREG))
		      continue;

		    /* Add the last use of each var to dead_or_set.  */
		    if (!bitmap_bit_p (live_relevant_regs, regno))
		      {
			if (regno < FIRST_PSEUDO_REGISTER)
			  {
			    if (!fixed_regs[regno])
			      bitmap_set_bit (&c->dead_or_set, regno);
			  }
			else if (pseudo_for_reload_consideration_p (regno))
			  bitmap_set_bit (&c->dead_or_set, regno);
		      }

		    if (regno < FIRST_PSEUDO_REGISTER
			|| pseudo_for_reload_consideration_p (regno))
		      {
			if (GET_CODE (reg) == SUBREG
			    && !DF_REF_FLAGS_IS_SET (use,
						     DF_REF_SIGN_EXTRACT
						     | DF_REF_ZERO_EXTRACT))
			  {
			    unsigned int start = SUBREG_BYTE (reg);
			    unsigned int last = start
			      + GET_MODE_SIZE (GET_MODE (reg));

			    init_live_subregs
			      (bitmap_bit_p (live_relevant_regs, regno),
			       live_subregs, live_subregs_used, regno, reg);

			    /* Ignore the paradoxical bits.  */
			    if (last > SBITMAP_SIZE (live_subregs[regno]))
			      last = SBITMAP_SIZE (live_subregs[regno]);

			    while (start < last)
			      {
				bitmap_set_bit (live_subregs[regno], start);
				start++;
			      }
			  }
			else
			  /* Resetting the live_subregs_used is
			     effectively saying do not use the subregs
			     because we are reading the whole
			     pseudo.  */
			  bitmap_clear_bit (live_subregs_used, regno);

			bitmap_set_bit (live_relevant_regs, regno);
		      }
		  }
	    }
	}
	  /* FIXME!! The following code is a disaster.  Reload needs to see the
	     labels and jump tables that are just hanging out in between
	     the basic blocks.  See pr33676.  */
	  insn = BB_HEAD (bb);

	  /* Skip over the barriers and cruft.  */
	  while (insn && (BARRIER_P (insn) || NOTE_P (insn)
			  || BLOCK_FOR_INSN (insn) == bb))
	    insn = PREV_INSN (insn);

	  /* While we add anything except barriers and notes, the focus is
	     to get the labels and jump tables into the
	     reload_insn_chain.  */
	  while (insn)
	    {
	      if (!NOTE_P (insn) && !BARRIER_P (insn))
		{
		  if (BLOCK_FOR_INSN (insn))
		    break;

		  c = new_insn_chain ();

		  /* The block makes no sense here, but it is what the old
		     code did.  */
		  c->block = bb->index;
		  bitmap_copy (&c->live_throughout, live_relevant_regs);
		}
	      insn = PREV_INSN (insn);
	    }
	}

  reload_insn_chain = c;

  for (i = 0; i < (unsigned int) max_regno; i++)
    if (live_subregs[i] != NULL)
      sbitmap_free (live_subregs[i]);
  free (live_subregs);
  BITMAP_FREE (live_subregs_used);
  BITMAP_FREE (live_relevant_regs);
  BITMAP_FREE (elim_regset);

  if (dump_file)
    print_insn_chains (dump_file);
}
/* Examine the rtx found in *LOC, which is read or written to as determined
   by TYPE.  Return false if we find a reason why an insn containing this
   rtx should not be moved (such as accesses to non-constant memory), true
   otherwise.  */
static bool
rtx_moveable_p (rtx *loc, enum op_type type)
{
  const char *fmt;
  rtx x = *loc;
  enum rtx_code code = GET_CODE (x);
  int i, j;

  code = GET_CODE (x);
  switch (code)
    {
    case CONST:
    CASE_CONST_ANY:
    case SYMBOL_REF:
    case LABEL_REF:
      return true;

    case PC:
      return type == OP_IN;

    case REG:
      if (x == frame_pointer_rtx)
	return true;
      if (HARD_REGISTER_P (x))
	return false;
      break;

    case MEM:
      if (type == OP_IN && MEM_READONLY_P (x))
	return rtx_moveable_p (&XEXP (x, 0), OP_IN);
      return false;

    case SET:
      return (rtx_moveable_p (&SET_SRC (x), OP_IN)
	      && rtx_moveable_p (&SET_DEST (x), OP_OUT));

    case STRICT_LOW_PART:
      return rtx_moveable_p (&XEXP (x, 0), OP_OUT);

    case ZERO_EXTRACT:
    case SIGN_EXTRACT:
      return (rtx_moveable_p (&XEXP (x, 0), type)
	      && rtx_moveable_p (&XEXP (x, 1), OP_IN)
	      && rtx_moveable_p (&XEXP (x, 2), OP_IN));

    case CLOBBER:
      return rtx_moveable_p (&SET_DEST (x), OP_OUT);

    default:
      break;
    }

  fmt = GET_RTX_FORMAT (code);
  for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
    {
      if (fmt[i] == 'e')
	{
	  if (!rtx_moveable_p (&XEXP (x, i), type))
	    return false;
	}
      else if (fmt[i] == 'E')
	for (j = XVECLEN (x, i) - 1; j >= 0; j--)
	  {
	    if (!rtx_moveable_p (&XVECEXP (x, i, j), type))
	      return false;
	  }
    }

  return true;
}
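
/* For example (illustrative, not from any testcase): as an input operand,
   (mem/u:SI (symbol_ref "table")) is moveable because the MEM is marked
   read-only, whereas a load from ordinary writable memory, or any use of a
   hard register other than the frame pointer, is not.  */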
/* A wrapper around dominated_by_p, which uses the information in UID_LUID
   to give dominance relationships between two insns I1 and I2.  */
static bool
insn_dominated_by_p (rtx i1, rtx i2, int *uid_luid)
{
  basic_block bb1 = BLOCK_FOR_INSN (i1);
  basic_block bb2 = BLOCK_FOR_INSN (i2);

  if (bb1 == bb2)
    return uid_luid[INSN_UID (i2)] < uid_luid[INSN_UID (i1)];
  return dominated_by_p (CDI_DOMINATORS, bb1, bb2);
}
/* Record the range of register numbers added by find_moveable_pseudos.  */
int first_moveable_pseudo, last_moveable_pseudo;

/* This vector holds data for every register added by
   find_moveable_pseudos, with index 0 holding data for the
   first_moveable_pseudo.  */
/* The original home register.  */
static vec<rtx> pseudo_replaced_reg;
/* Look for instances where we have an instruction that is known to increase
   register pressure, and whose result is not used immediately.  If it is
   possible to move the instruction downwards to just before its first use,
   split its lifetime into two ranges.  We create a new pseudo to compute the
   value, and emit a move instruction just before the first use.  If, after
   register allocation, the new pseudo remains unallocated, the function
   move_unallocated_pseudos then deletes the move instruction and places
   the computation just before the first use.

   Such a move is safe and profitable if all the input registers remain live
   and unchanged between the original computation and its first use.  In such
   a situation, the computation is known to increase register pressure, and
   moving it is known to at least not worsen it.

   We restrict moves to only those cases where a register remains unallocated,
   in order to avoid interfering too much with the instruction schedule.  As
   an exception, we may move insns which only modify their input register
   (typically induction variables), as this increases the freedom for our
   intended transformation, and does not limit the second instruction
   scheduling pass.  */
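
/* As a made-up illustration (register numbers and the insn sequence are
   assumptions, not from any testcase):

       (set (reg 100) (plus (reg 90) (const_int 8)))
       ... many insns across which reg 90 stays live and unchanged ...
       (set (mem X) (reg 100))

   The definition is rewritten to set a new pseudo, say 150, and a move
   (set (reg 100) (reg 150)) is emitted just before the store.  If reg 150
   still has no hard register after allocation, move_unallocated_pseudos
   deletes the move and re-emits the addition just before the store.  */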
static void
find_moveable_pseudos (void)
{
  unsigned i;
  int max_regs = max_reg_num ();
  int max_uid = get_max_uid ();
  basic_block bb;
  rtx_insn *insn;
  int *uid_luid = XNEWVEC (int, max_uid);
  rtx_insn **closest_uses = XNEWVEC (rtx_insn *, max_regs);
  /* A set of registers which are live but not modified throughout a block.  */
  bitmap_head *bb_transp_live = XNEWVEC (bitmap_head,
					 last_basic_block_for_fn (cfun));
  /* A set of registers which only exist in a given basic block.  */
  bitmap_head *bb_local = XNEWVEC (bitmap_head,
				   last_basic_block_for_fn (cfun));
  /* A set of registers which are set once, in an instruction that can be
     moved freely downwards, but are otherwise transparent to a block.  */
  bitmap_head *bb_moveable_reg_sets = XNEWVEC (bitmap_head,
					       last_basic_block_for_fn (cfun));
  bitmap_head live, used, set, interesting, unusable_as_input;
  bitmap_iterator bi;

  bitmap_initialize (&interesting, 0);

  first_moveable_pseudo = max_regs;
  pseudo_replaced_reg.release ();
  pseudo_replaced_reg.safe_grow_cleared (max_regs);

  calculate_dominance_info (CDI_DOMINATORS);

  i = 0;
  bitmap_initialize (&live, 0);
  bitmap_initialize (&used, 0);
  bitmap_initialize (&set, 0);
  bitmap_initialize (&unusable_as_input, 0);
  FOR_EACH_BB_FN (bb, cfun)
    {
      bitmap transp = bb_transp_live + bb->index;
      bitmap moveable = bb_moveable_reg_sets + bb->index;
      bitmap local = bb_local + bb->index;

      bitmap_initialize (local, 0);
      bitmap_initialize (transp, 0);
      bitmap_initialize (moveable, 0);
      bitmap_copy (&live, df_get_live_out (bb));
      bitmap_and_into (&live, df_get_live_in (bb));
      bitmap_copy (transp, &live);
      bitmap_clear (moveable);
      bitmap_clear (&live);
      bitmap_clear (&used);
      bitmap_clear (&set);
      FOR_BB_INSNS (bb, insn)
	if (NONDEBUG_INSN_P (insn))
	  {
	    df_insn_info *insn_info = DF_INSN_INFO_GET (insn);
	    df_ref def, use;

	    uid_luid[INSN_UID (insn)] = i++;

	    def = df_single_def (insn_info);
	    use = df_single_use (insn_info);
	    if (use
		&& def
		&& DF_REF_REGNO (use) == DF_REF_REGNO (def)
		&& !bitmap_bit_p (&set, DF_REF_REGNO (use))
		&& rtx_moveable_p (&PATTERN (insn), OP_IN))
	      {
		unsigned regno = DF_REF_REGNO (use);
		bitmap_set_bit (moveable, regno);
		bitmap_set_bit (&set, regno);
		bitmap_set_bit (&used, regno);
		bitmap_clear_bit (transp, regno);
		continue;
	      }
	    FOR_EACH_INSN_INFO_USE (use, insn_info)
	      {
		unsigned regno = DF_REF_REGNO (use);
		bitmap_set_bit (&used, regno);
		if (bitmap_clear_bit (moveable, regno))
		  bitmap_clear_bit (transp, regno);
	      }
	    FOR_EACH_INSN_INFO_DEF (def, insn_info)
	      {
		unsigned regno = DF_REF_REGNO (def);
		bitmap_set_bit (&set, regno);
		bitmap_clear_bit (transp, regno);
		bitmap_clear_bit (moveable, regno);
	      }
	  }
    }

  bitmap_clear (&live);
  bitmap_clear (&used);
  bitmap_clear (&set);
  FOR_EACH_BB_FN (bb, cfun)
    {
      bitmap local = bb_local + bb->index;

      FOR_BB_INSNS (bb, insn)
	if (NONDEBUG_INSN_P (insn))
	  {
	    df_insn_info *insn_info = DF_INSN_INFO_GET (insn);
	    df_ref def, use;
	    rtx closest_use, note;
	    rtx_insn *def_insn;
	    unsigned regno;
	    bool all_dominated, all_local;
	    enum machine_mode mode;

	    def = df_single_def (insn_info);
	    /* There must be exactly one def in this insn.  */
	    if (!def || !single_set (insn))
	      continue;

	    /* This must be the only definition of the reg.  We also limit
	       which modes we deal with so that we can assume we can generate
	       move instructions.  */
	    regno = DF_REF_REGNO (def);
	    mode = GET_MODE (DF_REF_REG (def));
	    if (DF_REG_DEF_COUNT (regno) != 1
		|| !DF_REF_INSN_INFO (def)
		|| HARD_REGISTER_NUM_P (regno)
		|| DF_REG_EQ_USE_COUNT (regno) > 0
		|| (!INTEGRAL_MODE_P (mode) && !FLOAT_MODE_P (mode)))
	      continue;

	    def_insn = DF_REF_INSN (def);

	    for (note = REG_NOTES (def_insn); note; note = XEXP (note, 1))
	      if (REG_NOTE_KIND (note) == REG_EQUIV && MEM_P (XEXP (note, 0)))
		break;

	    if (note)
	      {
		if (dump_file)
		  fprintf (dump_file, "Ignoring reg %d, has equiv memory\n",
			   regno);
		bitmap_set_bit (&unusable_as_input, regno);
		continue;
	      }

	    use = DF_REG_USE_CHAIN (regno);
	    all_dominated = true;
	    all_local = true;
	    closest_use = NULL_RTX;
	    for (; use; use = DF_REF_NEXT_REG (use))
	      {
		rtx_insn *insn;

		if (!DF_REF_INSN_INFO (use))
		  {
		    all_dominated = false;
		    all_local = false;
		    continue;
		  }
		insn = DF_REF_INSN (use);
		if (DEBUG_INSN_P (insn))
		  continue;
		if (BLOCK_FOR_INSN (insn) != BLOCK_FOR_INSN (def_insn))
		  all_local = false;
		if (!insn_dominated_by_p (insn, def_insn, uid_luid))
		  all_dominated = false;
		if (closest_use != insn && closest_use != const0_rtx)
		  {
		    if (closest_use == NULL_RTX)
		      closest_use = insn;
		    else if (insn_dominated_by_p (closest_use, insn, uid_luid))
		      closest_use = insn;
		    else if (!insn_dominated_by_p (insn, closest_use, uid_luid))
		      closest_use = const0_rtx;
		  }
	      }
	    if (!all_dominated)
	      {
		if (dump_file)
		  fprintf (dump_file, "Reg %d not all uses dominated by set\n",
			   regno);
		continue;
	      }
	    if (all_local)
	      bitmap_set_bit (local, regno);
	    if (closest_use == const0_rtx || closest_use == NULL
		|| next_nonnote_nondebug_insn (def_insn) == closest_use)
	      {
		if (dump_file)
		  fprintf (dump_file, "Reg %d uninteresting%s\n", regno,
			   closest_use == const0_rtx || closest_use == NULL
			   ? " (no unique first use)" : "");
		continue;
	      }
	    if (reg_referenced_p (cc0_rtx, PATTERN (closest_use)))
	      {
		if (dump_file)
		  fprintf (dump_file, "Reg %d: closest user uses cc0\n",
			   regno);
		continue;
	      }

	    bitmap_set_bit (&interesting, regno);
	    /* If we get here, we know closest_use is a non-NULL insn
	       (as opposed to const_0_rtx).  */
	    closest_uses[regno] = as_a <rtx_insn *> (closest_use);

	    if (dump_file && (all_local || all_dominated))
	      {
		fprintf (dump_file, "Reg %u:", regno);
		if (all_local)
		  fprintf (dump_file, " local to bb %d", bb->index);
		if (all_dominated)
		  fprintf (dump_file, " def dominates all uses");
		if (closest_use != const0_rtx)
		  fprintf (dump_file, " has unique first use");
		fputs ("\n", dump_file);
	      }
	  }
    }
  EXECUTE_IF_SET_IN_BITMAP (&interesting, 0, i, bi)
    {
      df_ref def = DF_REG_DEF_CHAIN (i);
      rtx_insn *def_insn = DF_REF_INSN (def);
      basic_block def_block = BLOCK_FOR_INSN (def_insn);
      bitmap def_bb_local = bb_local + def_block->index;
      bitmap def_bb_moveable = bb_moveable_reg_sets + def_block->index;
      bitmap def_bb_transp = bb_transp_live + def_block->index;
      bool local_to_bb_p = bitmap_bit_p (def_bb_local, i);
      rtx_insn *use_insn = closest_uses[i];
      df_ref use;
      bool all_ok = true;
      bool all_transp = true;

      if (!REG_P (DF_REF_REG (def)))
	continue;

      if (!local_to_bb_p)
	{
	  if (dump_file)
	    fprintf (dump_file, "Reg %u not local to one basic block\n", i);
	  continue;
	}
      if (reg_equiv_init (i) != NULL_RTX)
	{
	  if (dump_file)
	    fprintf (dump_file, "Ignoring reg %u with equiv init insn\n", i);
	  continue;
	}
      if (!rtx_moveable_p (&PATTERN (def_insn), OP_IN))
	{
	  if (dump_file)
	    fprintf (dump_file, "Found def insn %d for %d to be not moveable\n",
		     INSN_UID (def_insn), i);
	  continue;
	}
      if (dump_file)
	fprintf (dump_file, "Examining insn %d, def for %d\n",
		 INSN_UID (def_insn), i);
      FOR_EACH_INSN_USE (use, def_insn)
	{
	  unsigned regno = DF_REF_REGNO (use);
	  if (bitmap_bit_p (&unusable_as_input, regno))
	    {
	      all_ok = false;
	      if (dump_file)
		fprintf (dump_file, " found unusable input reg %u.\n", regno);
	      break;
	    }
	  if (!bitmap_bit_p (def_bb_transp, regno))
	    {
	      if (bitmap_bit_p (def_bb_moveable, regno)
		  && !control_flow_insn_p (use_insn)
		  && !sets_cc0_p (use_insn))
		{
		  if (modified_between_p (DF_REF_REG (use), def_insn, use_insn))
		    {
		      rtx_insn *x = NEXT_INSN (def_insn);
		      while (!modified_in_p (DF_REF_REG (use), x))
			{
			  gcc_assert (x != use_insn);
			  x = NEXT_INSN (x);
			}
		      if (dump_file)
			fprintf (dump_file,
				 " input reg %u modified but insn %d moveable\n",
				 regno, INSN_UID (x));
		      emit_insn_after (PATTERN (x), use_insn);
		      set_insn_deleted (x);
		    }
		  else
		    {
		      if (dump_file)
			fprintf (dump_file,
				 " input reg %u modified between def and use\n",
				 regno);
		      all_transp = false;
		    }
		}
	      else
		all_transp = false;
	    }
	}
      if (!all_ok)
	continue;
      if (!dbg_cnt (ira_move))
	break;
      if (dump_file)
	fprintf (dump_file, " all ok%s\n", all_transp ? " and transp" : "");

      if (all_transp)
	{
	  rtx def_reg = DF_REF_REG (def);
	  rtx newreg = ira_create_new_reg (def_reg);
	  if (validate_change (def_insn, DF_REF_REAL_LOC (def), newreg, 0))
	    {
	      unsigned nregno = REGNO (newreg);
	      emit_insn_before (gen_move_insn (def_reg, newreg), use_insn);
	      nregno -= max_regs;
	      pseudo_replaced_reg[nregno] = def_reg;
	    }
	}
    }
  FOR_EACH_BB_FN (bb, cfun)
    {
      bitmap_clear (bb_local + bb->index);
      bitmap_clear (bb_transp_live + bb->index);
      bitmap_clear (bb_moveable_reg_sets + bb->index);
    }
  bitmap_clear (&interesting);
  bitmap_clear (&unusable_as_input);
  free (closest_uses);
  free (bb_local);
  free (bb_transp_live);
  free (bb_moveable_reg_sets);

  last_moveable_pseudo = max_reg_num ();

  fix_reg_equiv_init ();
  regstat_free_n_sets_and_refs ();
  regstat_init_n_sets_and_refs ();
  regstat_compute_ri ();
  free_dominance_info (CDI_DOMINATORS);
}
/* If SET pattern SET is an assignment from a hard register to a pseudo which
   is live at CALL_DOM (if non-NULL, otherwise this check is omitted), return
   the destination.  Otherwise return NULL.  */
static rtx
interesting_dest_for_shprep_1 (rtx set, basic_block call_dom)
{
  rtx src = SET_SRC (set);
  rtx dest = SET_DEST (set);
  if (!REG_P (src) || !HARD_REGISTER_P (src)
      || !REG_P (dest) || HARD_REGISTER_P (dest)
      || (call_dom && !bitmap_bit_p (df_get_live_in (call_dom), REGNO (dest))))
    return NULL;
  return dest;
}
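
/* For instance (an illustrative pattern, not from any testcase), on a target
   whose first argument register is hard register 5, the pattern
   (set (reg 120) (reg:SI 5)) qualifies, provided pseudo 120 is live at
   CALL_DOM when CALL_DOM is given.  */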
/* If INSN is interesting for the parameter range-splitting shrink-wrapping
   preparation, i.e. it is a single set from a hard register to a pseudo that
   is live at CALL_DOM (if non-NULL, otherwise this check is omitted), or a
   parallel containing exactly one such set, return the destination.
   Otherwise return NULL.  */
static rtx
interesting_dest_for_shprep (rtx_insn *insn, basic_block call_dom)
{
  if (!INSN_P (insn))
    return NULL;
  rtx pat = PATTERN (insn);
  if (GET_CODE (pat) == SET)
    return interesting_dest_for_shprep_1 (pat, call_dom);

  if (GET_CODE (pat) != PARALLEL)
    return NULL;
  rtx ret = NULL;
  for (int i = 0; i < XVECLEN (pat, 0); i++)
    {
      rtx sub = XVECEXP (pat, 0, i);
      if (GET_CODE (sub) == USE || GET_CODE (sub) == CLOBBER)
	continue;
      if (GET_CODE (sub) != SET
	  || side_effects_p (sub))
	return NULL;
      rtx dest = interesting_dest_for_shprep_1 (sub, call_dom);
      if (dest && ret)
	return NULL;
      else if (dest)
	ret = dest;
    }
  return ret;
}
/* Split the live ranges of pseudos that are loaded from hard registers in
   the function's first basic block, splitting them at a basic block that
   dominates all non-sibling calls, if such a block can be found and is not
   in a loop.  Return true if the function has made any changes.  */
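
/* Concretely (a made-up example): if an incoming argument is copied into
   pseudo P in the first block, and every call lies below some block B that
   is outside any loop, then uses of P reachable from the calls are switched
   to a fresh pseudo P' and a copy P' := P is emitted at the start of B.
   This keeps the pre-B portion of P's live range free of calls, so P can
   stay in a call-clobbered parameter register there.  */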
static bool
split_live_ranges_for_shrink_wrap (void)
{
  basic_block bb, call_dom = NULL;
  basic_block first = single_succ (ENTRY_BLOCK_PTR_FOR_FN (cfun));
  rtx_insn *insn, *last_interesting_insn = NULL;
  bitmap_head need_new, reachable;
  vec<basic_block> queue;

  if (!SHRINK_WRAPPING_ENABLED)
    return false;

  bitmap_initialize (&need_new, 0);
  bitmap_initialize (&reachable, 0);
  queue.create (n_basic_blocks_for_fn (cfun));

  FOR_EACH_BB_FN (bb, cfun)
    FOR_BB_INSNS (bb, insn)
      if (CALL_P (insn) && !SIBLING_CALL_P (insn))
	{
	  if (bb == first)
	    {
	      bitmap_clear (&need_new);
	      bitmap_clear (&reachable);
	      queue.release ();
	      return false;
	    }

	  bitmap_set_bit (&need_new, bb->index);
	  bitmap_set_bit (&reachable, bb->index);
	  queue.quick_push (bb);
	  break;
	}

  if (queue.is_empty ())
    {
      bitmap_clear (&need_new);
      bitmap_clear (&reachable);
      queue.release ();
      return false;
    }

  while (!queue.is_empty ())
    {
      edge e;
      edge_iterator ei;

      bb = queue.pop ();
      FOR_EACH_EDGE (e, ei, bb->succs)
	if (e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun)
	    && bitmap_set_bit (&reachable, e->dest->index))
	  queue.quick_push (e->dest);
    }
  queue.release ();
  FOR_BB_INSNS (first, insn)
    {
      rtx dest = interesting_dest_for_shprep (insn, NULL);
      if (!dest)
	continue;

      if (DF_REG_DEF_COUNT (REGNO (dest)) > 1)
	{
	  bitmap_clear (&need_new);
	  bitmap_clear (&reachable);
	  return false;
	}

      for (df_ref use = DF_REG_USE_CHAIN (REGNO (dest));
	   use;
	   use = DF_REF_NEXT_REG (use))
	{
	  int ubbi = DF_REF_BB (use)->index;
	  if (bitmap_bit_p (&reachable, ubbi))
	    bitmap_set_bit (&need_new, ubbi);
	}
      last_interesting_insn = insn;
    }

  bitmap_clear (&reachable);
  if (!last_interesting_insn)
    {
      bitmap_clear (&need_new);
      return false;
    }

  call_dom = nearest_common_dominator_for_set (CDI_DOMINATORS, &need_new);
  bitmap_clear (&need_new);
  if (call_dom == first)
    return false;

  loop_optimizer_init (AVOID_CFG_MODIFICATIONS);
  while (bb_loop_depth (call_dom) > 0)
    call_dom = get_immediate_dominator (CDI_DOMINATORS, call_dom);
  loop_optimizer_finalize ();

  if (call_dom == first)
    return false;

  calculate_dominance_info (CDI_POST_DOMINATORS);
  if (dominated_by_p (CDI_POST_DOMINATORS, first, call_dom))
    {
      free_dominance_info (CDI_POST_DOMINATORS);
      return false;
    }
  free_dominance_info (CDI_POST_DOMINATORS);

  if (dump_file)
    fprintf (dump_file, "Will split live ranges of parameters at BB %i\n",
	     call_dom->index);

  FOR_BB_INSNS (first, insn)
    {
      rtx dest = interesting_dest_for_shprep (insn, call_dom);
      if (!dest || dest == pic_offset_table_rtx)
	continue;

      rtx newreg = NULL_RTX;
      df_ref use, next;
      for (use = DF_REG_USE_CHAIN (REGNO (dest)); use; use = next)
	{
	  rtx_insn *uin = DF_REF_INSN (use);
	  next = DF_REF_NEXT_REG (use);

	  basic_block ubb = BLOCK_FOR_INSN (uin);
	  if (ubb == call_dom
	      || dominated_by_p (CDI_DOMINATORS, ubb, call_dom))
	    {
	      if (!newreg)
		newreg = ira_create_new_reg (dest);
	      validate_change (uin, DF_REF_REAL_LOC (use), newreg, true);
	    }
	}

      if (newreg)
	{
	  rtx new_move = gen_move_insn (newreg, dest);
	  emit_insn_after (new_move, bb_note (call_dom));
	  if (dump_file)
	    {
	      fprintf (dump_file, "Split live-range of register ");
	      print_rtl_single (dump_file, dest);
	    }
	}

      if (insn == last_interesting_insn)
	break;
    }

  apply_change_group ();
  return true;
}
/* Perform the second half of the transformation started in
   find_moveable_pseudos.  We look for instances where the newly introduced
   pseudo remains unallocated, and remove it by moving the definition to
   just before its use, replacing the move instruction generated by
   find_moveable_pseudos.  */
static void
move_unallocated_pseudos (void)
{
  int i;

  for (i = first_moveable_pseudo; i < last_moveable_pseudo; i++)
    if (reg_renumber[i] < 0)
      {
	int idx = i - first_moveable_pseudo;
	rtx other_reg = pseudo_replaced_reg[idx];
	rtx_insn *def_insn = DF_REF_INSN (DF_REG_DEF_CHAIN (i));
	/* The use must follow all definitions of OTHER_REG, so we can
	   insert the new definition immediately after any of them.  */
	df_ref other_def = DF_REG_DEF_CHAIN (REGNO (other_reg));
	rtx_insn *move_insn = DF_REF_INSN (other_def);
	rtx_insn *newinsn = emit_insn_after (PATTERN (def_insn), move_insn);
	rtx set;
	int success;

	if (dump_file)
	  fprintf (dump_file, "moving def of %d (insn %d now) ",
		   REGNO (other_reg), INSN_UID (def_insn));

	delete_insn (move_insn);
	while ((other_def = DF_REG_DEF_CHAIN (REGNO (other_reg))))
	  delete_insn (DF_REF_INSN (other_def));
	delete_insn (def_insn);

	set = single_set (newinsn);
	success = validate_change (newinsn, &SET_DEST (set), other_reg, 0);
	gcc_assert (success);
	if (dump_file)
	  fprintf (dump_file,
		   " %d) rather than keep unallocated replacement %d\n",
		   INSN_UID (newinsn), i);

	SET_REG_N_REFS (i, 0);
	REG_FREQ (i) = 0;
      }
}
/* If the backend knows where to allocate pseudos for hard
   register initial values, register these allocations now.  */
static void
allocate_initial_values (void)
{
  if (targetm.allocate_initial_value)
    {
      rtx hreg, preg, x;
      int i, regno, new_regno;

      for (i = 0; HARD_REGISTER_NUM_P (i); i++)
	{
	  if (! initial_value_entry (i, &hreg, &preg))
	    break;

	  x = targetm.allocate_initial_value (hreg);
	  regno = REGNO (preg);
	  if (x && REG_N_SETS (regno) <= 1)
	    {
	      if (MEM_P (x))
		reg_equiv_memory_loc (regno) = x;
	      else
		{
		  basic_block bb;

		  gcc_assert (REG_P (x));
		  new_regno = REGNO (x);
		  reg_renumber[regno] = new_regno;
		  /* Poke the regno right into regno_reg_rtx so that even
		     fixed regs are accepted.  */
		  SET_REGNO (preg, new_regno);
		  /* Update global register liveness information.  */
		  FOR_EACH_BB_FN (bb, cfun)
		    {
		      if (REGNO_REG_SET_P (df_get_live_in (bb), regno))
			SET_REGNO_REG_SET (df_get_live_in (bb), new_regno);
		      if (REGNO_REG_SET_P (df_get_live_out (bb), regno))
			SET_REGNO_REG_SET (df_get_live_out (bb), new_regno);
		    }
		}
	    }
	}

      gcc_checking_assert (! initial_value_entry (FIRST_PSEUDO_REGISTER,
						  &hreg, &preg));
    }
}
/* True when we use LRA instead of reload pass for the current
   function.  */
bool ira_use_lra_p;

/* True if we have allocno conflicts.  It is false for non-optimized
   mode or when the conflict table is too big.  */
bool ira_conflicts_p;

/* Saved between IRA and reload.  */
static int saved_flag_ira_share_spill_slots;
/* This is the main entry of IRA.  */
static void
ira (FILE *f)
{
  bool loops_p;
  int ira_max_point_before_emit;
  int rebuild_p;
  bool saved_flag_caller_saves = flag_caller_saves;
  enum ira_region saved_flag_ira_region = flag_ira_region;

  /* Perform target specific PIC register initialization.  */
  targetm.init_pic_reg ();

  ira_conflicts_p = optimize > 0;

  ira_use_lra_p = targetm.lra_p ();
  /* If there are too many pseudos and/or basic blocks (e.g. 10K
     pseudos and 10K blocks or 100K pseudos and 1K blocks), we will
     use simplified and faster algorithms in LRA.  */
  lra_simple_p
    = (ira_use_lra_p
       && max_reg_num () >= (1 << 26) / last_basic_block_for_fn (cfun));
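  /* For instance, 1 << 26 is 67108864, so with 1000 basic blocks the
     threshold is about 67108 pseudos and with 10000 blocks about 6710,
     which matches the examples in the comment above.  */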
  if (lra_simple_p)
    {
      /* This allows live range splitting to be skipped in LRA.  */
      flag_caller_saves = false;
      /* There is no point in doing regional allocation when we use
	 LRA in its simplified mode.  */
      flag_ira_region = IRA_REGION_ONE;
      ira_conflicts_p = false;
    }

#ifndef IRA_NO_OBSTACK
  gcc_obstack_init (&ira_obstack);
#endif
  bitmap_obstack_initialize (&ira_bitmap_obstack);

  /* LRA uses its own infrastructure to handle caller save registers.  */
  if (flag_caller_saves && !ira_use_lra_p)
    init_caller_save ();

  if (flag_ira_verbose < 10)
    {
      internal_flag_ira_verbose = flag_ira_verbose;
      ira_dump_file = f;
    }
  else
    {
      internal_flag_ira_verbose = flag_ira_verbose - 10;
      ira_dump_file = stderr;
    }

  setup_prohibited_mode_move_regs ();
  decrease_live_ranges_number ();
  df_note_add_problem ();

  /* DF_LIVE can't be used in the register allocator, too many other
     parts of the compiler depend on using the "classic" liveness
     interpretation of the DF_LR problem.  See PR38711.
     Remove the problem, so that we don't spend time updating it in
     any of the df_analyze() calls during IRA/LRA.  */
  df_remove_problem (df_live);
  gcc_checking_assert (df_live == NULL);

#ifdef ENABLE_CHECKING
  df->changeable_flags |= DF_VERIFY_SCHEDULED;
#endif
  if (ira_conflicts_p)
    {
      calculate_dominance_info (CDI_DOMINATORS);

      if (split_live_ranges_for_shrink_wrap ())
	df_analyze ();

      free_dominance_info (CDI_DOMINATORS);
    }

  df_clear_flags (DF_NO_INSN_RESCAN);
  regstat_init_n_sets_and_refs ();
  regstat_compute_ri ();

  /* If we are not optimizing, then this is the only place before
     register allocation where dataflow is done.  And that is needed
     to generate these warnings.  */
  if (warn_clobbered)
    generate_setjmp_warnings ();

  /* Determine if the current function is a leaf before running IRA
     since this can impact optimizations done by the prologue and
     epilogue thus changing register elimination offsets.  */
  crtl->is_leaf = leaf_function_p ();

  if (resize_reg_info () && flag_ira_loop_pressure)
    ira_set_pseudo_classes (true, ira_dump_file);

  rebuild_p = update_equiv_regs ();
  setup_reg_equiv ();
  setup_reg_equiv_init ();

  if (optimize && rebuild_p)
    {
      timevar_push (TV_JUMP);
      rebuild_jump_labels (get_insns ());
      if (purge_all_dead_edges ())
	delete_unreachable_blocks ();
      timevar_pop (TV_JUMP);
    }

  allocated_reg_info_size = max_reg_num ();

  if (delete_trivially_dead_insns (get_insns (), max_reg_num ()))
    df_analyze ();

  /* It is not worth doing this improvement when we use simple
     allocation (at -O0) or when the function is too big.  */
  if (ira_conflicts_p)
    find_moveable_pseudos ();

  max_regno_before_ira = max_reg_num ();
  ira_setup_eliminable_regset ();

  ira_overall_cost = ira_reg_cost = ira_mem_cost = 0;
  ira_load_cost = ira_store_cost = ira_shuffle_cost = 0;
  ira_move_loops_num = ira_additional_jumps_num = 0;

  ira_assert (current_loops == NULL);
  if (flag_ira_region == IRA_REGION_ALL || flag_ira_region == IRA_REGION_MIXED)
    loop_optimizer_init (AVOID_CFG_MODIFICATIONS | LOOPS_HAVE_RECORDED_EXITS);

  if (internal_flag_ira_verbose > 0 && ira_dump_file != NULL)
    fprintf (ira_dump_file, "Building IRA IR\n");
  loops_p = ira_build ();

  ira_assert (ira_conflicts_p || !loops_p);

  saved_flag_ira_share_spill_slots = flag_ira_share_spill_slots;
  if (too_high_register_pressure_p () || cfun->calls_setjmp)
    /* Packing spilled pseudos into stack slots in this case just wastes
       compile time, so prohibit it.  We also do this if there is a setjmp
       call, because the compiler is required to preserve the value of a
       variable not modified between setjmp and longjmp, and sharing stack
       slots does not guarantee that.  */
    flag_ira_share_spill_slots = FALSE;
  ira_max_point_before_emit = ira_max_point;

  ira_initiate_emit_data ();

  max_regno = max_reg_num ();
  if (ira_conflicts_p)
    {
      if (! ira_use_lra_p)
	ira_initiate_assign ();

      {
	ira_allocno_t a;
	ira_allocno_iterator ai;

	FOR_EACH_ALLOCNO (a, ai)
	  ALLOCNO_REGNO (a) = REGNO (ALLOCNO_EMIT_DATA (a)->reg);
      }

      if (internal_flag_ira_verbose > 0 && ira_dump_file != NULL)
	fprintf (ira_dump_file, "Flattening IR\n");
      ira_flattening (max_regno_before_ira, ira_max_point_before_emit);

      /* New insns were generated: add notes and recalculate live
	 info.  */
      df_analyze ();

      /* ??? Rebuild the loop tree, but why?  Does the loop tree
	 change if new insns were generated?  Can that be handled
	 by updating the loop tree incrementally?  */
      loop_optimizer_finalize ();
      free_dominance_info (CDI_DOMINATORS);
      loop_optimizer_init (AVOID_CFG_MODIFICATIONS
			   | LOOPS_HAVE_RECORDED_EXITS);

      if (! ira_use_lra_p)
	{
	  setup_allocno_assignment_flags ();
	  ira_initiate_assign ();
	  ira_reassign_conflict_allocnos (max_regno);
	}
    }

  ira_finish_emit_data ();

  setup_reg_renumber ();

  calculate_allocation_cost ();

#ifdef ENABLE_IRA_CHECKING
  if (ira_conflicts_p)
    check_allocation ();
#endif

  if (max_regno != max_regno_before_ira)
    {
      regstat_free_n_sets_and_refs ();
      regstat_init_n_sets_and_refs ();
      regstat_compute_ri ();
    }

  overall_cost_before = ira_overall_cost;
  if (! ira_conflicts_p)
    grow_reg_equivs ();
  else
    {
      fix_reg_equiv_init ();

#ifdef ENABLE_IRA_CHECKING
      print_redundant_copies ();
#endif

      if (! ira_use_lra_p)
	{
	  ira_spilled_reg_stack_slots_num = 0;
	  ira_spilled_reg_stack_slots
	    = ((struct ira_spilled_reg_stack_slot *)
	       ira_allocate (max_regno
			     * sizeof (struct ira_spilled_reg_stack_slot)));
	  memset (ira_spilled_reg_stack_slots, 0,
		  max_regno * sizeof (struct ira_spilled_reg_stack_slot));
	}
    }

  allocate_initial_values ();

  /* See comment for find_moveable_pseudos call.  */
  if (ira_conflicts_p)
    move_unallocated_pseudos ();

  /* Restore original values.  */
  if (lra_simple_p)
    {
      flag_caller_saves = saved_flag_caller_saves;
      flag_ira_region = saved_flag_ira_region;
    }
}
static void
do_reload (void)
{
  basic_block bb;
  bool need_dce;
  unsigned pic_offset_table_regno = INVALID_REGNUM;

  if (flag_ira_verbose < 10)
    ira_dump_file = dump_file;

  /* If pic_offset_table_rtx is a pseudo register, then keep it so
     after reload to avoid possible wrong usages of hard reg assigned
     to it.  */
  if (pic_offset_table_rtx
      && REGNO (pic_offset_table_rtx) >= FIRST_PSEUDO_REGISTER)
    pic_offset_table_regno = REGNO (pic_offset_table_rtx);

  timevar_push (TV_RELOAD);
  if (ira_use_lra_p)
    {
      if (current_loops != NULL)
	{
	  loop_optimizer_finalize ();
	  free_dominance_info (CDI_DOMINATORS);
	}
      FOR_ALL_BB_FN (bb, cfun)
	bb->loop_father = NULL;
      current_loops = NULL;

      lra (ira_dump_file);
      /* ???!!! Move it before lra () when we use ira_reg_equiv in
	 ira and lra.  */
      vec_free (reg_equivs);
      need_dce = false;
    }
  else
    {
      df_set_flags (DF_NO_INSN_RESCAN);
      build_insn_chain ();

      need_dce = reload (get_insns (), ira_conflicts_p);
    }

  timevar_pop (TV_RELOAD);

  timevar_push (TV_IRA);

  if (ira_conflicts_p && ! ira_use_lra_p)
    {
      ira_free (ira_spilled_reg_stack_slots);
      ira_finish_assign ();
    }

  if (internal_flag_ira_verbose > 0 && ira_dump_file != NULL
      && overall_cost_before != ira_overall_cost)
    fprintf (ira_dump_file, "+++Overall after reload %d\n", ira_overall_cost);

  flag_ira_share_spill_slots = saved_flag_ira_share_spill_slots;

  if (! ira_use_lra_p)
    {
      if (current_loops != NULL)
	{
	  loop_optimizer_finalize ();
	  free_dominance_info (CDI_DOMINATORS);
	}
      FOR_ALL_BB_FN (bb, cfun)
	bb->loop_father = NULL;
      current_loops = NULL;
    }

  regstat_free_n_sets_and_refs ();

  cleanup_cfg (CLEANUP_EXPENSIVE);

  finish_reg_equiv ();

  bitmap_obstack_release (&ira_bitmap_obstack);
#ifndef IRA_NO_OBSTACK
  obstack_free (&ira_obstack, NULL);
#endif

  /* The code after the reload has changed so much that at this point
     we might as well just rescan everything.  Note that
     df_rescan_all_insns is not going to help here because it does not
     touch the artificial uses and defs.  */
  df_finish_pass (true);
  df_scan_alloc (NULL);

  df_live_add_problem ();
  df_live_set_all_dirty ();

  if (need_dce && optimize)
    run_fast_dce ();

  /* Diagnose uses of the hard frame pointer when it is used as a global
     register.  Often we can get away with letting the user appropriate
     the frame pointer, but we should let them know when code generation
     makes that impossible.  */
  if (global_regs[HARD_FRAME_POINTER_REGNUM] && frame_pointer_needed)
    {
      tree decl = global_regs_decl[HARD_FRAME_POINTER_REGNUM];
      error_at (DECL_SOURCE_LOCATION (current_function_decl),
		"frame pointer required, but reserved");
      inform (DECL_SOURCE_LOCATION (decl), "for %qD", decl);
    }

  if (pic_offset_table_regno != INVALID_REGNUM)
    pic_offset_table_rtx = gen_rtx_REG (Pmode, pic_offset_table_regno);

  timevar_pop (TV_IRA);
}
/* Run the integrated register allocator.  */

namespace {

const pass_data pass_data_ira =
{
  RTL_PASS, /* type */
  "ira", /* name */
  OPTGROUP_NONE, /* optinfo_flags */
  TV_IRA, /* tv_id */
  0, /* properties_required */
  0, /* properties_provided */
  0, /* properties_destroyed */
  0, /* todo_flags_start */
  TODO_do_not_ggc_collect, /* todo_flags_finish */
};

class pass_ira : public rtl_opt_pass
{
public:
  pass_ira (gcc::context *ctxt)
    : rtl_opt_pass (pass_data_ira, ctxt)
  {}

  /* opt_pass methods: */
  virtual unsigned int execute (function *)
    {
      ira (dump_file);
      return 0;
    }

}; // class pass_ira

} // anon namespace

rtl_opt_pass *
make_pass_ira (gcc::context *ctxt)
{
  return new pass_ira (ctxt);
}

namespace {

const pass_data pass_data_reload =
{
  RTL_PASS, /* type */
  "reload", /* name */
  OPTGROUP_NONE, /* optinfo_flags */
  TV_RELOAD, /* tv_id */
  0, /* properties_required */
  0, /* properties_provided */
  0, /* properties_destroyed */
  0, /* todo_flags_start */
  0, /* todo_flags_finish */
};

class pass_reload : public rtl_opt_pass
{
public:
  pass_reload (gcc::context *ctxt)
    : rtl_opt_pass (pass_data_reload, ctxt)
  {}

  /* opt_pass methods: */
  virtual unsigned int execute (function *)
    {
      do_reload ();
      return 0;
    }

}; // class pass_reload

} // anon namespace

rtl_opt_pass *
make_pass_reload (gcc::context *ctxt)
{
  return new pass_reload (ctxt);
}