]> git.ipfire.org Git - thirdparty/gcc.git/blob - gcc/cp/module.cc
c++/modules: Improve diagnostic when redeclaring builtin in module [PR102345]
[thirdparty/gcc.git] / gcc / cp / module.cc
1 /* C++ modules. Experimental!
2 Copyright (C) 2017-2024 Free Software Foundation, Inc.
3 Written by Nathan Sidwell <nathan@acm.org> while at FaceBook
4
5 This file is part of GCC.
6
7 GCC is free software; you can redistribute it and/or modify it
8 under the terms of the GNU General Public License as published by
9 the Free Software Foundation; either version 3, or (at your option)
10 any later version.
11
12 GCC is distributed in the hope that it will be useful, but
13 WITHOUT ANY WARRANTY; without even the implied warranty of
14 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
15 General Public License for more details.
16
17 You should have received a copy of the GNU General Public License
18 along with GCC; see the file COPYING3. If not see
19 <http://www.gnu.org/licenses/>. */
20
21 /* Comments in this file have a non-negligible chance of being wrong
22 or at least inaccurate. Due to (a) my misunderstanding, (b)
23 ambiguities that I have interpretted differently to original intent
24 (c) changes in the specification, (d) my poor wording, (e) source
25 changes. */
26
27 /* (Incomplete) Design Notes
28
29 A hash table contains all module names. Imported modules are
30 present in a modules array, which by construction places an
31 import's dependencies before the import itself. The single
32 exception is the current TU, which always occupies slot zero (even
33 when it is not a module).
34
35 Imported decls occupy an entity_ary, an array of binding_slots, indexed
36 by importing module and index within that module. A flat index is
37 used, as each module reserves a contiguous range of indices.
38 Initially each slot indicates the CMI section containing the
39 streamed decl. When the decl is imported it will point to the decl
40 itself.
41
42 Additionally each imported decl is mapped in the entity_map via its
43 DECL_UID to the flat index in the entity_ary. Thus we can locate
44 the index for any imported decl by using this map and then
45 de-flattening the index via a binary seach of the module vector.
46 Cross-module references are by (remapped) module number and
47 module-local index.
48
49 Each importable DECL contains several flags. The simple set are
50 DECL_MODULE_EXPORT_P, DECL_MODULE_PURVIEW_P, DECL_MODULE_ATTACH_P
51 and DECL_MODULE_IMPORT_P. The first indicates whether it is
52 exported, the second whether it is in module or header-unit
53 purview. The third indicates it is attached to the named module in
54 whose purview it resides and the fourth indicates whether it was an
55 import into this TU or not. DECL_MODULE_ATTACH_P will be false for
56 all decls in a header-unit, and for those in a named module inside
57 a linkage declaration.
58
59 The more detailed flags are DECL_MODULE_PARTITION_P,
60 DECL_MODULE_ENTITY_P. The first is set in a primary interface unit
61 on decls that were read from module partitions (these will have
62 DECL_MODULE_IMPORT_P set too). Such decls will be streamed out to
63 the primary's CMI. DECL_MODULE_ENTITY_P is set when an entity is
64 imported, even if it matched a non-imported entity. Such a decl
65 will not have DECL_MODULE_IMPORT_P set, even though it has an entry
66 in the entity map and array.
67
68 Header units are module-like.
69
70 For namespace-scope lookup, the decls for a particular module are
71 held located in a sparse array hanging off the binding of the name.
72 This is partitioned into two: a few fixed slots at the start
73 followed by the sparse slots afterwards. By construction we only
74 need to append new slots to the end -- there is never a need to
75 insert in the middle. The fixed slots are MODULE_SLOT_CURRENT for
76 the current TU (regardless of whether it is a module or not),
77 MODULE_SLOT_GLOBAL and MODULE_SLOT_PARTITION. These latter two
78 slots are used for merging entities across the global module and
79 module partitions respectively. MODULE_SLOT_PARTITION is only
80 present in a module. Neither of those two slots is searched during
81 name lookup -- they are internal use only. This vector is created
82 lazily once we require it, if there is only a declaration from the
83 current TU, a regular binding is present. It is converted on
84 demand.
85
86 OPTIMIZATION: Outside of the current TU, we only need ADL to work.
87 We could optimize regular lookup for the current TU by glomming all
88 the visible decls on its slot. Perhaps wait until design is a
89 little more settled though.
90
91 There is only one instance of each extern-linkage namespace. It
92 appears in every module slot that makes it visible. It also
93 appears in MODULE_SLOT_GLOBAL. (It is an ODR violation if they
94 collide with some other global module entity.) We also have an
95 optimization that shares the slot for adjacent modules that declare
96 the same such namespace.
97
98 A module interface compilation produces a Compiled Module Interface
99 (CMI). The format used is Encapsulated Lazy Records Of Numbered
100 Declarations, which is essentially ELF's section encapsulation. (As
101 all good nerds are aware, Elrond is half Elf.) Some sections are
102 named, and contain information about the module as a whole (indices
103 etc), and other sections are referenced by number. Although I
104 don't defend against actively hostile CMIs, there is some
105 checksumming involved to verify data integrity. When dumping out
106 an interface, we generate a graph of all the
107 independently-redeclarable DECLS that are needed, and the decls
108 they reference. From that we determine the strongly connected
109 components (SCC) within this TU. Each SCC is dumped to a separate
110 numbered section of the CMI. We generate a binding table section,
111 mapping each namespace&name to a defining section. This allows
112 lazy loading.
113
114 Lazy loading employs mmap to map a read-only image of the CMI.
115 It thus only occupies address space and is paged in on demand,
116 backed by the CMI file itself. If mmap is unavailable, regular
117 FILEIO is used. Also, there's a bespoke ELF reader/writer here,
118 which implements just the section table and sections (including
119 string sections) of a 32-bit ELF in host byte-order. You can of
120 course inspect it with readelf. I figured 32-bit is sufficient,
121 for a single module. I detect running out of section numbers, but
122 do not implement the ELF overflow mechanism. At least you'll get
123 an error if that happens.
124
125 We do not separate declarations and definitions. My guess is that
126 if you refer to the declaration, you'll also need the definition
127 (template body, inline function, class definition etc). But this
128 does mean we can get larger SCCs than if we separated them. It is
129 unclear whether this is a win or not.
130
131 Notice that we embed section indices into the contents of other
132 sections. Thus random manipulation of the CMI file by ELF tools
133 may well break it. The kosher way would probably be to introduce
134 indirection via section symbols, but that would require defining a
135 relocation type.
136
137 Notice that lazy loading of one module's decls can cause lazy
138 loading of other decls in the same or another module. Clearly we
139 want to avoid loops. In a correct program there can be no loops in
140 the module dependency graph, and the above-mentioned SCC algorithm
141 places all intra-module circular dependencies in the same SCC. It
142 also orders the SCCs wrt each other, so dependent SCCs come first.
143 As we load dependent modules first, we know there can be no
144 reference to a higher-numbered module, and because we write out
145 dependent SCCs first, likewise for SCCs within the module. This
146 allows us to immediately detect broken references. When loading,
147 we must ensure the rest of the compiler doesn't cause some
148 unconnected load to occur (for instance, instantiate a template).
149
150 Classes used:
151
152 dumper - logger
153
154 data - buffer
155
156 bytes_in : data - scalar reader
157 bytes_out : data - scalar writer
158
159 bytes_in::bits_in - bit stream reader
160 bytes_out::bits_out - bit stream writer
161
162 elf - ELROND format
163 elf_in : elf - ELROND reader
164 elf_out : elf - ELROND writer
165
166 trees_in : bytes_in - tree reader
167 trees_out : bytes_out - tree writer
168
169 depset - dependency set
170 depset::hash - hash table of depsets
171 depset::tarjan - SCC determinator
172
173 uidset<T> - set T's related to a UID
174 uidset<T>::hash hash table of uidset<T>
175
176 loc_spans - location map data
177
178 module_state - module object
179
180 slurping - data needed during loading
181
182 macro_import - imported macro data
183 macro_export - exported macro data
184
185 The ELROND objects use mmap, for both reading and writing. If mmap
186 is unavailable, fileno IO is used to read and write blocks of data.
187
188 The mapper object uses fileno IO to communicate with the server or
189 program. */
190
191 /* In expermental (trunk) sources, MODULE_VERSION is a #define passed
192 in from the Makefile. It records the modification date of the
193 source directory -- that's the only way to stay sane. In release
194 sources, we (plan to) use the compiler's major.minor versioning.
195 While the format might not change between at minor versions, it
196 seems simplest to tie the two together. There's no concept of
197 inter-version compatibility. */
198 #define IS_EXPERIMENTAL(V) ((V) >= (1U << 20))
199 #define MODULE_MAJOR(V) ((V) / 10000)
200 #define MODULE_MINOR(V) ((V) % 10000)
201 #define EXPERIMENT(A,B) (IS_EXPERIMENTAL (MODULE_VERSION) ? (A) : (B))
202 #ifndef MODULE_VERSION
203 #include "bversion.h"
204 #define MODULE_VERSION (BUILDING_GCC_MAJOR * 10000U + BUILDING_GCC_MINOR)
205 #elif !IS_EXPERIMENTAL (MODULE_VERSION)
206 #error "This is not the version I was looking for."
207 #endif
208
209 #define _DEFAULT_SOURCE 1 /* To get TZ field of struct tm, if available. */
210 #include "config.h"
211 #define INCLUDE_MEMORY
212 #define INCLUDE_STRING
213 #define INCLUDE_VECTOR
214 #include "system.h"
215 #include "coretypes.h"
216 #include "cp-tree.h"
217 #include "timevar.h"
218 #include "stringpool.h"
219 #include "dumpfile.h"
220 #include "bitmap.h"
221 #include "cgraph.h"
222 #include "varasm.h"
223 #include "tree-iterator.h"
224 #include "cpplib.h"
225 #include "mkdeps.h"
226 #include "incpath.h"
227 #include "libiberty.h"
228 #include "stor-layout.h"
229 #include "version.h"
230 #include "tree-diagnostic.h"
231 #include "toplev.h"
232 #include "opts.h"
233 #include "attribs.h"
234 #include "intl.h"
235 #include "langhooks.h"
236 /* This TU doesn't need or want to see the networking. */
237 #define CODY_NETWORKING 0
238 #include "mapper-client.h"
239 #include <zlib.h> // for crc32, crc32_combine
240
241 #if 0 // 1 for testing no mmap
242 #define MAPPED_READING 0
243 #define MAPPED_WRITING 0
244 #else
245 #if HAVE_MMAP_FILE && _POSIX_MAPPED_FILES > 0
246 /* mmap, munmap. */
247 #define MAPPED_READING 1
248 #if HAVE_SYSCONF && defined (_SC_PAGE_SIZE)
249 /* msync, sysconf (_SC_PAGE_SIZE), ftruncate */
250 /* posix_fallocate used if available. */
251 #define MAPPED_WRITING 1
252 #else
253 #define MAPPED_WRITING 0
254 #endif
255 #else
256 #define MAPPED_READING 0
257 #define MAPPED_WRITING 0
258 #endif
259 #endif
260
261 /* Some open(2) flag differences, what a colourful world it is! */
262 #if defined (O_CLOEXEC)
263 // OK
264 #elif defined (_O_NOINHERIT)
265 /* Windows' _O_NOINHERIT matches O_CLOEXEC flag */
266 #define O_CLOEXEC _O_NOINHERIT
267 #else
268 #define O_CLOEXEC 0
269 #endif
270 #if defined (O_BINARY)
271 // Ok?
272 #elif defined (_O_BINARY)
273 /* Windows' open(2) call defaults to text! */
274 #define O_BINARY _O_BINARY
275 #else
276 #define O_BINARY 0
277 #endif
278
279 static inline cpp_hashnode *cpp_node (tree id)
280 {
281 return CPP_HASHNODE (GCC_IDENT_TO_HT_IDENT (id));
282 }
283
284 static inline tree identifier (const cpp_hashnode *node)
285 {
286 /* HT_NODE() expands to node->ident that HT_IDENT_TO_GCC_IDENT()
287 then subtracts a nonzero constant, deriving a pointer to
288 a different member than ident. That's strictly undefined
289 and detected by -Warray-bounds. Suppress it. See PR 101372. */
290 #pragma GCC diagnostic push
291 #pragma GCC diagnostic ignored "-Warray-bounds"
292 return HT_IDENT_TO_GCC_IDENT (HT_NODE (const_cast<cpp_hashnode *> (node)));
293 #pragma GCC diagnostic pop
294 }
295
296 /* Id for dumping module information. */
297 int module_dump_id;
298
299 /* We have a special module owner. */
300 #define MODULE_UNKNOWN (~0U) /* Not yet known. */
301
302 /* Prefix for section names. */
303 #define MOD_SNAME_PFX ".gnu.c++"
304
305 /* Format a version for user consumption. */
306
307 typedef char verstr_t[32];
308 static void
309 version2string (unsigned version, verstr_t &out)
310 {
311 unsigned major = MODULE_MAJOR (version);
312 unsigned minor = MODULE_MINOR (version);
313
314 if (IS_EXPERIMENTAL (version))
315 sprintf (out, "%04u/%02u/%02u-%02u:%02u%s",
316 2000 + major / 10000, (major / 100) % 100, (major % 100),
317 minor / 100, minor % 100,
318 EXPERIMENT ("", " (experimental)"));
319 else
320 sprintf (out, "%u.%u", major, minor);
321 }
322
323 /* Include files to note translation for. */
324 static vec<const char *, va_heap, vl_embed> *note_includes;
325
326 /* Modules to note CMI pathames. */
327 static vec<const char *, va_heap, vl_embed> *note_cmis;
328
329 /* Traits to hash an arbitrary pointer. Entries are not deletable,
330 and removal is a noop (removal needed upon destruction). */
331 template <typename T>
332 struct nodel_ptr_hash : pointer_hash<T>, typed_noop_remove <T *> {
333 /* Nothing is deletable. Everything is insertable. */
334 static bool is_deleted (T *) { return false; }
335 static void mark_deleted (T *) { gcc_unreachable (); }
336 };
337
338 /* Map from pointer to signed integer. */
339 typedef simple_hashmap_traits<nodel_ptr_hash<void>, int> ptr_int_traits;
340 typedef hash_map<void *,signed,ptr_int_traits> ptr_int_hash_map;
341
342 /********************************************************************/
343 /* Basic streaming & ELF. Serialization is usually via mmap. For
344 writing we slide a buffer over the output file, syncing it
345 approproiately. For reading we simply map the whole file (as a
346 file-backed read-only map -- it's just address space, leaving the
347 OS pager to deal with getting the data to us). Some buffers need
348 to be more conventional malloc'd contents. */
349
350 /* Variable length buffer. */
351
352 namespace {
353 class data {
354 public:
355 class allocator {
356 public:
357 /* Tools tend to moan if the dtor's not virtual. */
358 virtual ~allocator () {}
359
360 public:
361 void grow (data &obj, unsigned needed, bool exact);
362 void shrink (data &obj);
363
364 public:
365 virtual char *grow (char *ptr, unsigned needed);
366 virtual void shrink (char *ptr);
367 };
368
369 public:
370 char *buffer; /* Buffer being transferred. */
371 /* Although size_t would be the usual size, we know we never get
372 more than 4GB of buffer -- because that's the limit of the
373 encapsulation format. And if you need bigger imports, you're
374 doing it wrong. */
375 unsigned size; /* Allocated size of buffer. */
376 unsigned pos; /* Position in buffer. */
377
378 public:
379 data ()
380 :buffer (NULL), size (0), pos (0)
381 {
382 }
383 ~data ()
384 {
385 /* Make sure the derived and/or using class know what they're
386 doing. */
387 gcc_checking_assert (!buffer);
388 }
389
390 protected:
391 char *use (unsigned count)
392 {
393 if (size < pos + count)
394 return NULL;
395 char *res = &buffer[pos];
396 pos += count;
397 return res;
398 }
399
400 unsigned calc_crc (unsigned) const;
401
402 public:
403 void unuse (unsigned count)
404 {
405 pos -= count;
406 }
407
408 public:
409 static allocator simple_memory;
410 };
411 } // anon namespace
412
413 /* The simple data allocator. */
414 data::allocator data::simple_memory;
415
416 /* Grow buffer to at least size NEEDED. */
417
418 void
419 data::allocator::grow (data &obj, unsigned needed, bool exact)
420 {
421 gcc_checking_assert (needed ? needed > obj.size : !obj.size);
422 if (!needed)
423 /* Pick a default size. */
424 needed = EXPERIMENT (100, 1000);
425
426 if (!exact)
427 needed *= 2;
428 obj.buffer = grow (obj.buffer, needed);
429 if (obj.buffer)
430 obj.size = needed;
431 else
432 obj.pos = obj.size = 0;
433 }
434
435 /* Free a buffer. */
436
437 void
438 data::allocator::shrink (data &obj)
439 {
440 shrink (obj.buffer);
441 obj.buffer = NULL;
442 obj.size = 0;
443 }
444
445 char *
446 data::allocator::grow (char *ptr, unsigned needed)
447 {
448 return XRESIZEVAR (char, ptr, needed);
449 }
450
451 void
452 data::allocator::shrink (char *ptr)
453 {
454 XDELETEVEC (ptr);
455 }
456
457 /* Calculate the crc32 of the buffer. Note the CRC is stored in the
458 first 4 bytes, so don't include them. */
459
460 unsigned
461 data::calc_crc (unsigned l) const
462 {
463 return crc32 (0, (unsigned char *)buffer + 4, l - 4);
464 }
465
466 class elf_in;
467
468 /* Byte stream reader. */
469
470 namespace {
471 class bytes_in : public data {
472 typedef data parent;
473
474 protected:
475 bool overrun; /* Sticky read-too-much flag. */
476
477 public:
478 bytes_in ()
479 : parent (), overrun (false)
480 {
481 }
482 ~bytes_in ()
483 {
484 }
485
486 public:
487 /* Begin reading a named section. */
488 bool begin (location_t loc, elf_in *src, const char *name);
489 /* Begin reading a numbered section with optional name. */
490 bool begin (location_t loc, elf_in *src, unsigned, const char * = NULL);
491 /* Complete reading a buffer. Propagate errors and return true on
492 success. */
493 bool end (elf_in *src);
494 /* Return true if there is unread data. */
495 bool more_p () const
496 {
497 return pos != size;
498 }
499
500 public:
501 /* Start reading at OFFSET. */
502 void random_access (unsigned offset)
503 {
504 if (offset > size)
505 set_overrun ();
506 pos = offset;
507 }
508
509 public:
510 void align (unsigned boundary)
511 {
512 if (unsigned pad = pos & (boundary - 1))
513 read (boundary - pad);
514 }
515
516 public:
517 const char *read (unsigned count)
518 {
519 char *ptr = use (count);
520 if (!ptr)
521 set_overrun ();
522 return ptr;
523 }
524
525 public:
526 bool check_crc () const;
527 /* We store the CRC in the first 4 bytes, using host endianness. */
528 unsigned get_crc () const
529 {
530 return *(const unsigned *)&buffer[0];
531 }
532
533 public:
534 /* Manipulate the overrun flag. */
535 bool get_overrun () const
536 {
537 return overrun;
538 }
539 void set_overrun ()
540 {
541 overrun = true;
542 }
543
544 public:
545 unsigned u32 (); /* Read uncompressed integer. */
546
547 public:
548 int c () ATTRIBUTE_UNUSED; /* Read a char. */
549 int i (); /* Read a signed int. */
550 unsigned u (); /* Read an unsigned int. */
551 size_t z (); /* Read a size_t. */
552 HOST_WIDE_INT wi (); /* Read a HOST_WIDE_INT. */
553 unsigned HOST_WIDE_INT wu (); /* Read an unsigned HOST_WIDE_INT. */
554 const char *str (size_t * = NULL); /* Read a string. */
555 const void *buf (size_t); /* Read a fixed-length buffer. */
556 cpp_hashnode *cpp_node (); /* Read a cpp node. */
557
558 struct bits_in;
559 bits_in stream_bits ();
560 };
561 } // anon namespace
562
563 /* Verify the buffer's CRC is correct. */
564
565 bool
566 bytes_in::check_crc () const
567 {
568 if (size < 4)
569 return false;
570
571 unsigned c_crc = calc_crc (size);
572 if (c_crc != get_crc ())
573 return false;
574
575 return true;
576 }
577
578 class elf_out;
579
580 /* Byte stream writer. */
581
582 namespace {
583 class bytes_out : public data {
584 typedef data parent;
585
586 public:
587 allocator *memory; /* Obtainer of memory. */
588
589 public:
590 bytes_out (allocator *memory)
591 : parent (), memory (memory)
592 {
593 }
594 ~bytes_out ()
595 {
596 }
597
598 public:
599 bool streaming_p () const
600 {
601 return memory != NULL;
602 }
603
604 public:
605 void set_crc (unsigned *crc_ptr);
606
607 public:
608 /* Begin writing, maybe reserve space for CRC. */
609 void begin (bool need_crc = true);
610 /* Finish writing. Spill to section by number. */
611 unsigned end (elf_out *, unsigned, unsigned *crc_ptr = NULL);
612
613 public:
614 void align (unsigned boundary)
615 {
616 if (unsigned pad = pos & (boundary - 1))
617 write (boundary - pad);
618 }
619
620 public:
621 char *write (unsigned count, bool exact = false)
622 {
623 if (size < pos + count)
624 memory->grow (*this, pos + count, exact);
625 return use (count);
626 }
627
628 public:
629 void u32 (unsigned); /* Write uncompressed integer. */
630
631 public:
632 void c (unsigned char) ATTRIBUTE_UNUSED; /* Write unsigned char. */
633 void i (int); /* Write signed int. */
634 void u (unsigned); /* Write unsigned int. */
635 void z (size_t s); /* Write size_t. */
636 void wi (HOST_WIDE_INT); /* Write HOST_WIDE_INT. */
637 void wu (unsigned HOST_WIDE_INT); /* Write unsigned HOST_WIDE_INT. */
638 void str (const char *ptr)
639 {
640 str (ptr, strlen (ptr));
641 }
642 void cpp_node (const cpp_hashnode *node)
643 {
644 str ((const char *)NODE_NAME (node), NODE_LEN (node));
645 }
646 void str (const char *, size_t); /* Write string of known length. */
647 void buf (const void *, size_t); /* Write fixed length buffer. */
648 void *buf (size_t); /* Create a writable buffer */
649
650 struct bits_out;
651 bits_out stream_bits ();
652
653 public:
654 /* Format a NUL-terminated raw string. */
655 void printf (const char *, ...) ATTRIBUTE_PRINTF_2;
656 void print_time (const char *, const tm *, const char *);
657
658 public:
659 /* Dump instrumentation. */
660 static void instrument ();
661
662 protected:
663 /* Instrumentation. */
664 static unsigned spans[4];
665 static unsigned lengths[4];
666 };
667 } // anon namespace
668
669 /* Finish bit packet. Rewind the bytes not used. */
670
671 static unsigned
672 bit_flush (data& bits, uint32_t& bit_val, unsigned& bit_pos)
673 {
674 gcc_assert (bit_pos);
675 unsigned bytes = (bit_pos + 7) / 8;
676 bits.unuse (4 - bytes);
677 bit_pos = 0;
678 bit_val = 0;
679 return bytes;
680 }
681
682 /* Bit stream reader (RAII-enabled). Bools are packed into bytes. You
683 cannot mix bools and non-bools. Use bflush to flush the current stream
684 of bools on demand. Upon destruction bflush is called.
685
686 When reading, we don't know how many bools we'll read in. So read
687 4 bytes-worth, and then rewind when flushing if we didn't need them
688 all. You can't have a block of bools closer than 4 bytes to the
689 end of the buffer.
690
691 Both bits_in and bits_out maintain the necessary state for bit packing,
692 and since these objects are locally constructed the compiler can more
693 easily track their state across consecutive reads/writes and optimize
694 away redundant buffering checks. */
695
696 struct bytes_in::bits_in {
697 bytes_in& in;
698 uint32_t bit_val = 0;
699 unsigned bit_pos = 0;
700
701 bits_in (bytes_in& in)
702 : in (in)
703 { }
704
705 ~bits_in ()
706 {
707 bflush ();
708 }
709
710 bits_in(bits_in&&) = default;
711 bits_in(const bits_in&) = delete;
712 bits_in& operator=(const bits_in&) = delete;
713
714 /* Completed a block of bools. */
715 void bflush ()
716 {
717 if (bit_pos)
718 bit_flush (in, bit_val, bit_pos);
719 }
720
721 /* Read one bit. */
722 bool b ()
723 {
724 if (!bit_pos)
725 bit_val = in.u32 ();
726 bool x = (bit_val >> bit_pos) & 1;
727 bit_pos = (bit_pos + 1) % 32;
728 return x;
729 }
730 };
731
732 /* Factory function for bits_in. */
733
734 bytes_in::bits_in
735 bytes_in::stream_bits ()
736 {
737 return bits_in (*this);
738 }
739
740 /* Bit stream writer (RAII-enabled), counterpart to bits_in. */
741
742 struct bytes_out::bits_out {
743 bytes_out& out;
744 uint32_t bit_val = 0;
745 unsigned bit_pos = 0;
746 char is_set = -1;
747
748 bits_out (bytes_out& out)
749 : out (out)
750 { }
751
752 ~bits_out ()
753 {
754 bflush ();
755 }
756
757 bits_out(bits_out&&) = default;
758 bits_out(const bits_out&) = delete;
759 bits_out& operator=(const bits_out&) = delete;
760
761 /* Completed a block of bools. */
762 void bflush ()
763 {
764 if (bit_pos)
765 {
766 out.u32 (bit_val);
767 out.lengths[2] += bit_flush (out, bit_val, bit_pos);
768 }
769 out.spans[2]++;
770 is_set = -1;
771 }
772
773 /* Write one bit.
774
775 It may be worth optimizing for most bools being zero. Some kind of
776 run-length encoding? */
777 void b (bool x)
778 {
779 if (is_set != x)
780 {
781 is_set = x;
782 out.spans[x]++;
783 }
784 out.lengths[x]++;
785 bit_val |= unsigned (x) << bit_pos++;
786 if (bit_pos == 32)
787 {
788 out.u32 (bit_val);
789 out.lengths[2] += bit_flush (out, bit_val, bit_pos);
790 }
791 }
792 };
793
794 /* Factory function for bits_out. */
795
796 bytes_out::bits_out
797 bytes_out::stream_bits ()
798 {
799 return bits_out (*this);
800 }
801
802 /* Instrumentation. */
803 unsigned bytes_out::spans[4];
804 unsigned bytes_out::lengths[4];
805
806 /* If CRC_PTR non-null, set the CRC of the buffer. Mix the CRC into
807 that pointed to by CRC_PTR. */
808
809 void
810 bytes_out::set_crc (unsigned *crc_ptr)
811 {
812 if (crc_ptr)
813 {
814 gcc_checking_assert (pos >= 4);
815
816 unsigned crc = calc_crc (pos);
817 unsigned accum = *crc_ptr;
818 /* Only mix the existing *CRC_PTR if it is non-zero. */
819 accum = accum ? crc32_combine (accum, crc, pos - 4) : crc;
820 *crc_ptr = accum;
821
822 /* Buffer will be sufficiently aligned. */
823 *(unsigned *)buffer = crc;
824 }
825 }
826
827 /* Exactly 4 bytes. Used internally for bool packing and a few other
828 places. We can't simply use uint32_t because (a) alignment and
829 (b) we need little-endian for the bool streaming rewinding to make
830 sense. */
831
832 void
833 bytes_out::u32 (unsigned val)
834 {
835 if (char *ptr = write (4))
836 {
837 ptr[0] = val;
838 ptr[1] = val >> 8;
839 ptr[2] = val >> 16;
840 ptr[3] = val >> 24;
841 }
842 }
843
844 unsigned
845 bytes_in::u32 ()
846 {
847 unsigned val = 0;
848 if (const char *ptr = read (4))
849 {
850 val |= (unsigned char)ptr[0];
851 val |= (unsigned char)ptr[1] << 8;
852 val |= (unsigned char)ptr[2] << 16;
853 val |= (unsigned char)ptr[3] << 24;
854 }
855
856 return val;
857 }
858
859 /* Chars are unsigned and written as single bytes. */
860
861 void
862 bytes_out::c (unsigned char v)
863 {
864 if (char *ptr = write (1))
865 *ptr = v;
866 }
867
868 int
869 bytes_in::c ()
870 {
871 int v = 0;
872 if (const char *ptr = read (1))
873 v = (unsigned char)ptr[0];
874 return v;
875 }
876
877 /* Ints 7-bit as a byte. Otherwise a 3bit count of following bytes in
878 big-endian form. 4 bits are in the first byte. */
879
880 void
881 bytes_out::i (int v)
882 {
883 if (char *ptr = write (1))
884 {
885 if (v <= 0x3f && v >= -0x40)
886 *ptr = v & 0x7f;
887 else
888 {
889 unsigned bytes = 0;
890 int probe;
891 if (v >= 0)
892 for (probe = v >> 8; probe > 0x7; probe >>= 8)
893 bytes++;
894 else
895 for (probe = v >> 8; probe < -0x8; probe >>= 8)
896 bytes++;
897 *ptr = 0x80 | bytes << 4 | (probe & 0xf);
898 if ((ptr = write (++bytes)))
899 for (; bytes--; v >>= 8)
900 ptr[bytes] = v & 0xff;
901 }
902 }
903 }
904
905 int
906 bytes_in::i ()
907 {
908 int v = 0;
909 if (const char *ptr = read (1))
910 {
911 v = *ptr & 0xff;
912 if (v & 0x80)
913 {
914 unsigned bytes = (v >> 4) & 0x7;
915 v &= 0xf;
916 if (v & 0x8)
917 v |= -1 ^ 0x7;
918 /* unsigned necessary due to left shifts of -ve values. */
919 unsigned uv = unsigned (v);
920 if ((ptr = read (++bytes)))
921 while (bytes--)
922 uv = (uv << 8) | (*ptr++ & 0xff);
923 v = int (uv);
924 }
925 else if (v & 0x40)
926 v |= -1 ^ 0x3f;
927 }
928
929 return v;
930 }
931
932 void
933 bytes_out::u (unsigned v)
934 {
935 if (char *ptr = write (1))
936 {
937 if (v <= 0x7f)
938 *ptr = v;
939 else
940 {
941 unsigned bytes = 0;
942 unsigned probe;
943 for (probe = v >> 8; probe > 0xf; probe >>= 8)
944 bytes++;
945 *ptr = 0x80 | bytes << 4 | probe;
946 if ((ptr = write (++bytes)))
947 for (; bytes--; v >>= 8)
948 ptr[bytes] = v & 0xff;
949 }
950 }
951 }
952
953 unsigned
954 bytes_in::u ()
955 {
956 unsigned v = 0;
957
958 if (const char *ptr = read (1))
959 {
960 v = *ptr & 0xff;
961 if (v & 0x80)
962 {
963 unsigned bytes = (v >> 4) & 0x7;
964 v &= 0xf;
965 if ((ptr = read (++bytes)))
966 while (bytes--)
967 v = (v << 8) | (*ptr++ & 0xff);
968 }
969 }
970
971 return v;
972 }
973
974 void
975 bytes_out::wi (HOST_WIDE_INT v)
976 {
977 if (char *ptr = write (1))
978 {
979 if (v <= 0x3f && v >= -0x40)
980 *ptr = v & 0x7f;
981 else
982 {
983 unsigned bytes = 0;
984 HOST_WIDE_INT probe;
985 if (v >= 0)
986 for (probe = v >> 8; probe > 0x7; probe >>= 8)
987 bytes++;
988 else
989 for (probe = v >> 8; probe < -0x8; probe >>= 8)
990 bytes++;
991 *ptr = 0x80 | bytes << 4 | (probe & 0xf);
992 if ((ptr = write (++bytes)))
993 for (; bytes--; v >>= 8)
994 ptr[bytes] = v & 0xff;
995 }
996 }
997 }
998
999 HOST_WIDE_INT
1000 bytes_in::wi ()
1001 {
1002 HOST_WIDE_INT v = 0;
1003 if (const char *ptr = read (1))
1004 {
1005 v = *ptr & 0xff;
1006 if (v & 0x80)
1007 {
1008 unsigned bytes = (v >> 4) & 0x7;
1009 v &= 0xf;
1010 if (v & 0x8)
1011 v |= -1 ^ 0x7;
1012 /* unsigned necessary due to left shifts of -ve values. */
1013 unsigned HOST_WIDE_INT uv = (unsigned HOST_WIDE_INT) v;
1014 if ((ptr = read (++bytes)))
1015 while (bytes--)
1016 uv = (uv << 8) | (*ptr++ & 0xff);
1017 v = (HOST_WIDE_INT) uv;
1018 }
1019 else if (v & 0x40)
1020 v |= -1 ^ 0x3f;
1021 }
1022
1023 return v;
1024 }
1025
1026 /* unsigned wide ints are just written as signed wide ints. */
1027
1028 inline void
1029 bytes_out::wu (unsigned HOST_WIDE_INT v)
1030 {
1031 wi ((HOST_WIDE_INT) v);
1032 }
1033
1034 inline unsigned HOST_WIDE_INT
1035 bytes_in::wu ()
1036 {
1037 return (unsigned HOST_WIDE_INT) wi ();
1038 }
1039
1040 /* size_t written as unsigned or unsigned wide int. */
1041
1042 inline void
1043 bytes_out::z (size_t s)
1044 {
1045 if (sizeof (s) == sizeof (unsigned))
1046 u (s);
1047 else
1048 wu (s);
1049 }
1050
1051 inline size_t
1052 bytes_in::z ()
1053 {
1054 if (sizeof (size_t) == sizeof (unsigned))
1055 return u ();
1056 else
1057 return wu ();
1058 }
1059
1060 /* Buffer simply memcpied. */
1061 void *
1062 bytes_out::buf (size_t len)
1063 {
1064 align (sizeof (void *) * 2);
1065 return write (len);
1066 }
1067
1068 void
1069 bytes_out::buf (const void *src, size_t len)
1070 {
1071 if (void *ptr = buf (len))
1072 memcpy (ptr, src, len);
1073 }
1074
1075 const void *
1076 bytes_in::buf (size_t len)
1077 {
1078 align (sizeof (void *) * 2);
1079 const char *ptr = read (len);
1080
1081 return ptr;
1082 }
1083
1084 /* strings as an size_t length, followed by the buffer. Make sure
1085 there's a NUL terminator on read. */
1086
1087 void
1088 bytes_out::str (const char *string, size_t len)
1089 {
1090 z (len);
1091 if (len)
1092 {
1093 gcc_checking_assert (!string[len]);
1094 buf (string, len + 1);
1095 }
1096 }
1097
1098 const char *
1099 bytes_in::str (size_t *len_p)
1100 {
1101 size_t len = z ();
1102
1103 /* We're about to trust some user data. */
1104 if (overrun)
1105 len = 0;
1106 if (len_p)
1107 *len_p = len;
1108 const char *str = NULL;
1109 if (len)
1110 {
1111 str = reinterpret_cast<const char *> (buf (len + 1));
1112 if (!str || str[len])
1113 {
1114 set_overrun ();
1115 str = NULL;
1116 }
1117 }
1118 return str ? str : "";
1119 }
1120
1121 cpp_hashnode *
1122 bytes_in::cpp_node ()
1123 {
1124 size_t len;
1125 const char *s = str (&len);
1126 if (!len)
1127 return NULL;
1128 return ::cpp_node (get_identifier_with_length (s, len));
1129 }
1130
1131 /* Format a string directly to the buffer, including a terminating
1132 NUL. Intended for human consumption. */
1133
1134 void
1135 bytes_out::printf (const char *format, ...)
1136 {
1137 va_list args;
1138 /* Exercise buffer expansion. */
1139 size_t len = EXPERIMENT (10, 500);
1140
1141 while (char *ptr = write (len))
1142 {
1143 va_start (args, format);
1144 size_t actual = vsnprintf (ptr, len, format, args) + 1;
1145 va_end (args);
1146 if (actual <= len)
1147 {
1148 unuse (len - actual);
1149 break;
1150 }
1151 unuse (len);
1152 len = actual;
1153 }
1154 }
1155
1156 void
1157 bytes_out::print_time (const char *kind, const tm *time, const char *tz)
1158 {
1159 printf ("%stime: %4u/%02u/%02u %02u:%02u:%02u %s",
1160 kind, time->tm_year + 1900, time->tm_mon + 1, time->tm_mday,
1161 time->tm_hour, time->tm_min, time->tm_sec, tz);
1162 }
1163
1164 /* Encapsulated Lazy Records Of Named Declarations.
1165 Header: Stunningly Elf32_Ehdr-like
1166 Sections: Sectional data
1167 [1-N) : User data sections
1168 N .strtab : strings, stunningly ELF STRTAB-like
1169 Index: Section table, stunningly ELF32_Shdr-like. */
1170
1171 class elf {
1172 protected:
1173 /* Constants used within the format. */
1174 enum private_constants {
1175 /* File kind. */
1176 ET_NONE = 0,
1177 EM_NONE = 0,
1178 OSABI_NONE = 0,
1179
1180 /* File format. */
1181 EV_CURRENT = 1,
1182 CLASS32 = 1,
1183 DATA2LSB = 1,
1184 DATA2MSB = 2,
1185
1186 /* Section numbering. */
1187 SHN_UNDEF = 0,
1188 SHN_LORESERVE = 0xff00,
1189 SHN_XINDEX = 0xffff,
1190
1191 /* Section types. */
1192 SHT_NONE = 0, /* No contents. */
1193 SHT_PROGBITS = 1, /* Random bytes. */
1194 SHT_STRTAB = 3, /* A string table. */
1195
1196 /* Section flags. */
1197 SHF_NONE = 0x00, /* Nothing. */
1198 SHF_STRINGS = 0x20, /* NUL-Terminated strings. */
1199
1200 /* I really hope we do not get CMI files larger than 4GB. */
1201 MY_CLASS = CLASS32,
1202 /* It is host endianness that is relevant. */
1203 MY_ENDIAN = DATA2LSB
1204 #ifdef WORDS_BIGENDIAN
1205 ^ DATA2LSB ^ DATA2MSB
1206 #endif
1207 };
1208
1209 public:
1210 /* Constants visible to users. */
1211 enum public_constants {
1212 /* Special error codes. Breaking layering a bit. */
1213 E_BAD_DATA = -1, /* Random unexpected data errors. */
1214 E_BAD_LAZY = -2, /* Badly ordered laziness. */
1215 E_BAD_IMPORT = -3 /* A nested import failed. */
1216 };
1217
1218 protected:
1219 /* File identification. On-disk representation. */
1220 struct ident {
1221 uint8_t magic[4]; /* 0x7f, 'E', 'L', 'F' */
1222 uint8_t klass; /* 4:CLASS32 */
1223 uint8_t data; /* 5:DATA2[LM]SB */
1224 uint8_t version; /* 6:EV_CURRENT */
1225 uint8_t osabi; /* 7:OSABI_NONE */
1226 uint8_t abiver; /* 8: 0 */
1227 uint8_t pad[7]; /* 9-15 */
1228 };
1229 /* File header. On-disk representation. */
1230 struct header {
1231 struct ident ident;
1232 uint16_t type; /* ET_NONE */
1233 uint16_t machine; /* EM_NONE */
1234 uint32_t version; /* EV_CURRENT */
1235 uint32_t entry; /* 0 */
1236 uint32_t phoff; /* 0 */
1237 uint32_t shoff; /* Section Header Offset in file */
1238 uint32_t flags;
1239 uint16_t ehsize; /* ELROND Header SIZE -- sizeof (header) */
1240 uint16_t phentsize; /* 0 */
1241 uint16_t phnum; /* 0 */
1242 uint16_t shentsize; /* Section Header SIZE -- sizeof (section) */
1243 uint16_t shnum; /* Section Header NUM */
1244 uint16_t shstrndx; /* Section Header STRing iNDeX */
1245 };
1246 /* File section. On-disk representation. */
1247 struct section {
1248 uint32_t name; /* String table offset. */
1249 uint32_t type; /* SHT_* */
1250 uint32_t flags; /* SHF_* */
1251 uint32_t addr; /* 0 */
1252 uint32_t offset; /* OFFSET in file */
1253 uint32_t size; /* SIZE of section */
1254 uint32_t link; /* 0 */
1255 uint32_t info; /* 0 */
1256 uint32_t addralign; /* 0 */
1257 uint32_t entsize; /* ENTry SIZE, usually 0 */
1258 };
1259
1260 protected:
1261 data hdr; /* The header. */
1262 data sectab; /* The section table. */
1263 data strtab; /* String table. */
1264 int fd; /* File descriptor we're reading or writing. */
1265 int err; /* Sticky error code. */
1266
1267 public:
1268 /* Construct from STREAM. E is errno if STREAM NULL. */
1269 elf (int fd, int e)
1270 :hdr (), sectab (), strtab (), fd (fd), err (fd >= 0 ? 0 : e)
1271 {}
1272 ~elf ()
1273 {
1274 gcc_checking_assert (fd < 0 && !hdr.buffer
1275 && !sectab.buffer && !strtab.buffer);
1276 }
1277
1278 public:
1279 /* Return the error, if we have an error. */
1280 int get_error () const
1281 {
1282 return err;
1283 }
1284 /* Set the error, unless it's already been set. */
1285 void set_error (int e = E_BAD_DATA)
1286 {
1287 if (!err)
1288 err = e;
1289 }
1290 /* Get an error string. */
1291 const char *get_error (const char *) const;
1292
1293 public:
1294 /* Begin reading/writing file. Return false on error. */
1295 bool begin () const
1296 {
1297 return !get_error ();
1298 }
1299 /* Finish reading/writing file. Return false on error. */
1300 bool end ();
1301 };
1302
1303 /* Return error string. */
1304
1305 const char *
1306 elf::get_error (const char *name) const
1307 {
1308 if (!name)
1309 return "Unknown CMI mapping";
1310
1311 switch (err)
1312 {
1313 case 0:
1314 gcc_unreachable ();
1315 case E_BAD_DATA:
1316 return "Bad file data";
1317 case E_BAD_IMPORT:
1318 return "Bad import dependency";
1319 case E_BAD_LAZY:
1320 return "Bad lazy ordering";
1321 default:
1322 return xstrerror (err);
1323 }
1324 }
1325
1326 /* Finish file, return true if there's an error. */
1327
1328 bool
1329 elf::end ()
1330 {
1331 /* Close the stream and free the section table. */
1332 if (fd >= 0 && close (fd))
1333 set_error (errno);
1334 fd = -1;
1335
1336 return !get_error ();
1337 }
1338
1339 /* ELROND reader. */
1340
1341 class elf_in : public elf {
1342 typedef elf parent;
1343
1344 private:
1345 /* For freezing & defrosting. */
1346 #if !defined (HOST_LACKS_INODE_NUMBERS)
1347 dev_t device;
1348 ino_t inode;
1349 #endif
1350
1351 public:
1352 elf_in (int fd, int e)
1353 :parent (fd, e)
1354 {
1355 }
1356 ~elf_in ()
1357 {
1358 }
1359
1360 public:
1361 bool is_frozen () const
1362 {
1363 return fd < 0 && hdr.pos;
1364 }
1365 bool is_freezable () const
1366 {
1367 return fd >= 0 && hdr.pos;
1368 }
1369 void freeze ();
1370 bool defrost (const char *);
1371
1372 /* If BYTES is in the mmapped area, allocate a new buffer for it. */
1373 void preserve (bytes_in &bytes ATTRIBUTE_UNUSED)
1374 {
1375 #if MAPPED_READING
1376 if (hdr.buffer && bytes.buffer >= hdr.buffer
1377 && bytes.buffer < hdr.buffer + hdr.pos)
1378 {
1379 char *buf = bytes.buffer;
1380 bytes.buffer = data::simple_memory.grow (NULL, bytes.size);
1381 memcpy (bytes.buffer, buf, bytes.size);
1382 }
1383 #endif
1384 }
1385 /* If BYTES is not in SELF's mmapped area, free it. SELF might be
1386 NULL. */
1387 static void release (elf_in *self ATTRIBUTE_UNUSED, bytes_in &bytes)
1388 {
1389 #if MAPPED_READING
1390 if (!(self && self->hdr.buffer && bytes.buffer >= self->hdr.buffer
1391 && bytes.buffer < self->hdr.buffer + self->hdr.pos))
1392 #endif
1393 data::simple_memory.shrink (bytes.buffer);
1394 bytes.buffer = NULL;
1395 bytes.size = 0;
1396 }
1397
1398 public:
1399 static void grow (data &data, unsigned needed)
1400 {
1401 gcc_checking_assert (!data.buffer);
1402 #if !MAPPED_READING
1403 data.buffer = XNEWVEC (char, needed);
1404 #endif
1405 data.size = needed;
1406 }
1407 static void shrink (data &data)
1408 {
1409 #if !MAPPED_READING
1410 XDELETEVEC (data.buffer);
1411 #endif
1412 data.buffer = NULL;
1413 data.size = 0;
1414 }
1415
1416 public:
1417 const section *get_section (unsigned s) const
1418 {
1419 if (s * sizeof (section) < sectab.size)
1420 return reinterpret_cast<const section *>
1421 (&sectab.buffer[s * sizeof (section)]);
1422 else
1423 return NULL;
1424 }
1425 unsigned get_section_limit () const
1426 {
1427 return sectab.size / sizeof (section);
1428 }
1429
1430 protected:
1431 const char *read (data *, unsigned, unsigned);
1432
1433 public:
1434 /* Read section by number. */
1435 bool read (data *d, const section *s)
1436 {
1437 return s && read (d, s->offset, s->size);
1438 }
1439
1440 /* Find section by name. */
1441 unsigned find (const char *name);
1442 /* Find section by index. */
1443 const section *find (unsigned snum, unsigned type = SHT_PROGBITS);
1444
1445 public:
1446 /* Release the string table, when we're done with it. */
1447 void release ()
1448 {
1449 shrink (strtab);
1450 }
1451
1452 public:
1453 bool begin (location_t);
1454 bool end ()
1455 {
1456 release ();
1457 #if MAPPED_READING
1458 if (hdr.buffer)
1459 munmap (hdr.buffer, hdr.pos);
1460 hdr.buffer = NULL;
1461 #endif
1462 shrink (sectab);
1463
1464 return parent::end ();
1465 }
1466
1467 public:
1468 /* Return string name at OFFSET. Checks OFFSET range. Always
1469 returns non-NULL. We know offset 0 is an empty string. */
1470 const char *name (unsigned offset)
1471 {
1472 return &strtab.buffer[offset < strtab.size ? offset : 0];
1473 }
1474 };
1475
1476 /* ELROND writer. */
1477
1478 class elf_out : public elf, public data::allocator {
1479 typedef elf parent;
1480 /* Desired section alignment on disk. */
1481 static const int SECTION_ALIGN = 16;
1482
1483 private:
1484 ptr_int_hash_map identtab; /* Map of IDENTIFIERS to strtab offsets. */
1485 unsigned pos; /* Write position in file. */
1486 #if MAPPED_WRITING
1487 unsigned offset; /* Offset of the mapping. */
1488 unsigned extent; /* Length of mapping. */
1489 unsigned page_size; /* System page size. */
1490 #endif
1491
1492 public:
1493 elf_out (int fd, int e)
1494 :parent (fd, e), identtab (500), pos (0)
1495 {
1496 #if MAPPED_WRITING
1497 offset = extent = 0;
1498 page_size = sysconf (_SC_PAGE_SIZE);
1499 if (page_size < SECTION_ALIGN)
1500 /* Something really strange. */
1501 set_error (EINVAL);
1502 #endif
1503 }
1504 ~elf_out ()
1505 {
1506 data::simple_memory.shrink (hdr);
1507 data::simple_memory.shrink (sectab);
1508 data::simple_memory.shrink (strtab);
1509 }
1510
1511 #if MAPPED_WRITING
1512 private:
1513 void create_mapping (unsigned ext, bool extending = true);
1514 void remove_mapping ();
1515 #endif
1516
1517 protected:
1518 using allocator::grow;
1519 char *grow (char *, unsigned needed) final override;
1520 #if MAPPED_WRITING
1521 using allocator::shrink;
1522 void shrink (char *) final override;
1523 #endif
1524
1525 public:
1526 unsigned get_section_limit () const
1527 {
1528 return sectab.pos / sizeof (section);
1529 }
1530
1531 protected:
1532 unsigned add (unsigned type, unsigned name = 0,
1533 unsigned off = 0, unsigned size = 0, unsigned flags = SHF_NONE);
1534 unsigned write (const data &);
1535 #if MAPPED_WRITING
1536 unsigned write (const bytes_out &);
1537 #endif
1538
1539 public:
1540 /* IDENTIFIER to strtab offset. */
1541 unsigned name (tree ident);
1542 /* String literal to strtab offset. */
1543 unsigned name (const char *n);
1544 /* Qualified name of DECL to strtab offset. */
1545 unsigned qualified_name (tree decl, bool is_defn);
1546
1547 private:
1548 unsigned strtab_write (const char *s, unsigned l);
1549 void strtab_write (tree decl, int);
1550
1551 public:
1552 /* Add a section with contents or strings. */
1553 unsigned add (const bytes_out &, bool string_p, unsigned name);
1554
1555 public:
1556 /* Begin and end writing. */
1557 bool begin ();
1558 bool end ();
1559 };
1560
1561 /* Begin reading section NAME (of type PROGBITS) from SOURCE.
1562 Data always checked for CRC. */
1563
1564 bool
1565 bytes_in::begin (location_t loc, elf_in *source, const char *name)
1566 {
1567 unsigned snum = source->find (name);
1568
1569 return begin (loc, source, snum, name);
1570 }
1571
1572 /* Begin reading section numbered SNUM with NAME (may be NULL). */
1573
1574 bool
1575 bytes_in::begin (location_t loc, elf_in *source, unsigned snum, const char *name)
1576 {
1577 if (!source->read (this, source->find (snum))
1578 || !size || !check_crc ())
1579 {
1580 source->set_error (elf::E_BAD_DATA);
1581 source->shrink (*this);
1582 if (name)
1583 error_at (loc, "section %qs is missing or corrupted", name);
1584 else
1585 error_at (loc, "section #%u is missing or corrupted", snum);
1586 return false;
1587 }
1588 pos = 4;
1589 return true;
1590 }
1591
1592 /* Finish reading a section. */
1593
1594 bool
1595 bytes_in::end (elf_in *src)
1596 {
1597 if (more_p ())
1598 set_overrun ();
1599 if (overrun)
1600 src->set_error ();
1601
1602 src->shrink (*this);
1603
1604 return !overrun;
1605 }
1606
1607 /* Begin writing buffer. */
1608
1609 void
1610 bytes_out::begin (bool need_crc)
1611 {
1612 if (need_crc)
1613 pos = 4;
1614 memory->grow (*this, 0, false);
1615 }
1616
1617 /* Finish writing buffer. Stream out to SINK as named section NAME.
1618 Return section number or 0 on failure. If CRC_PTR is true, crc
1619 the data. Otherwise it is a string section. */
1620
1621 unsigned
1622 bytes_out::end (elf_out *sink, unsigned name, unsigned *crc_ptr)
1623 {
1624 lengths[3] += pos;
1625 spans[3]++;
1626
1627 set_crc (crc_ptr);
1628 unsigned sec_num = sink->add (*this, !crc_ptr, name);
1629 memory->shrink (*this);
1630
1631 return sec_num;
1632 }
1633
1634 /* Close and open the file, without destroying it. */
1635
1636 void
1637 elf_in::freeze ()
1638 {
1639 gcc_checking_assert (!is_frozen ());
1640 #if MAPPED_READING
1641 if (munmap (hdr.buffer, hdr.pos) < 0)
1642 set_error (errno);
1643 #endif
1644 if (close (fd) < 0)
1645 set_error (errno);
1646 fd = -1;
1647 }
1648
1649 bool
1650 elf_in::defrost (const char *name)
1651 {
1652 gcc_checking_assert (is_frozen ());
1653 struct stat stat;
1654
1655 fd = open (name, O_RDONLY | O_CLOEXEC | O_BINARY);
1656 if (fd < 0 || fstat (fd, &stat) < 0)
1657 set_error (errno);
1658 else
1659 {
1660 bool ok = hdr.pos == unsigned (stat.st_size);
1661 #ifndef HOST_LACKS_INODE_NUMBERS
1662 if (device != stat.st_dev
1663 || inode != stat.st_ino)
1664 ok = false;
1665 #endif
1666 if (!ok)
1667 set_error (EMFILE);
1668 #if MAPPED_READING
1669 if (ok)
1670 {
1671 char *mapping = reinterpret_cast<char *>
1672 (mmap (NULL, hdr.pos, PROT_READ, MAP_SHARED, fd, 0));
1673 if (mapping == MAP_FAILED)
1674 fail:
1675 set_error (errno);
1676 else
1677 {
1678 if (madvise (mapping, hdr.pos, MADV_RANDOM))
1679 goto fail;
1680
1681 /* These buffers are never NULL in this case. */
1682 strtab.buffer = mapping + strtab.pos;
1683 sectab.buffer = mapping + sectab.pos;
1684 hdr.buffer = mapping;
1685 }
1686 }
1687 #endif
1688 }
1689
1690 return !get_error ();
1691 }
1692
1693 /* Read at current position into BUFFER. Return true on success. */
1694
1695 const char *
1696 elf_in::read (data *data, unsigned pos, unsigned length)
1697 {
1698 #if MAPPED_READING
1699 if (pos + length > hdr.pos)
1700 {
1701 set_error (EINVAL);
1702 return NULL;
1703 }
1704 #else
1705 if (pos != ~0u && lseek (fd, pos, SEEK_SET) < 0)
1706 {
1707 set_error (errno);
1708 return NULL;
1709 }
1710 #endif
1711 grow (*data, length);
1712 #if MAPPED_READING
1713 data->buffer = hdr.buffer + pos;
1714 #else
1715 if (::read (fd, data->buffer, data->size) != ssize_t (length))
1716 {
1717 set_error (errno);
1718 shrink (*data);
1719 return NULL;
1720 }
1721 #endif
1722
1723 return data->buffer;
1724 }
1725
1726 /* Read section SNUM of TYPE. Return section pointer or NULL on error. */
1727
1728 const elf::section *
1729 elf_in::find (unsigned snum, unsigned type)
1730 {
1731 const section *sec = get_section (snum);
1732 if (!snum || !sec || sec->type != type)
1733 return NULL;
1734 return sec;
1735 }
1736
1737 /* Find a section NAME and TYPE. Return section number, or zero on
1738 failure. */
1739
1740 unsigned
1741 elf_in::find (const char *sname)
1742 {
1743 for (unsigned pos = sectab.size; pos -= sizeof (section); )
1744 {
1745 const section *sec
1746 = reinterpret_cast<const section *> (&sectab.buffer[pos]);
1747
1748 if (0 == strcmp (sname, name (sec->name)))
1749 return pos / sizeof (section);
1750 }
1751
1752 return 0;
1753 }
1754
1755 /* Begin reading file. Verify header. Pull in section and string
1756 tables. Return true on success. */
1757
1758 bool
1759 elf_in::begin (location_t loc)
1760 {
1761 if (!parent::begin ())
1762 return false;
1763
1764 struct stat stat;
1765 unsigned size = 0;
1766 if (!fstat (fd, &stat))
1767 {
1768 #if !defined (HOST_LACKS_INODE_NUMBERS)
1769 device = stat.st_dev;
1770 inode = stat.st_ino;
1771 #endif
1772 /* Never generate files > 4GB, check we've not been given one. */
1773 if (stat.st_size == unsigned (stat.st_size))
1774 size = unsigned (stat.st_size);
1775 }
1776
1777 #if MAPPED_READING
1778 /* MAP_SHARED so that the file is backing store. If someone else
1779 concurrently writes it, they're wrong. */
1780 void *mapping = mmap (NULL, size, PROT_READ, MAP_SHARED, fd, 0);
1781 if (mapping == MAP_FAILED)
1782 {
1783 fail:
1784 set_error (errno);
1785 return false;
1786 }
1787 /* We'll be hopping over this randomly. Some systems declare the
1788 first parm as char *, and other declare it as void *. */
1789 if (madvise (reinterpret_cast <char *> (mapping), size, MADV_RANDOM))
1790 goto fail;
1791
1792 hdr.buffer = (char *)mapping;
1793 #else
1794 read (&hdr, 0, sizeof (header));
1795 #endif
1796 hdr.pos = size; /* Record size of the file. */
1797
1798 const header *h = reinterpret_cast<const header *> (hdr.buffer);
1799 if (!h)
1800 return false;
1801
1802 if (h->ident.magic[0] != 0x7f
1803 || h->ident.magic[1] != 'E'
1804 || h->ident.magic[2] != 'L'
1805 || h->ident.magic[3] != 'F')
1806 {
1807 error_at (loc, "not Encapsulated Lazy Records of Named Declarations");
1808 failed:
1809 shrink (hdr);
1810 return false;
1811 }
1812
1813 /* We expect a particular format -- the ELF is not intended to be
1814 distributable. */
1815 if (h->ident.klass != MY_CLASS
1816 || h->ident.data != MY_ENDIAN
1817 || h->ident.version != EV_CURRENT
1818 || h->type != ET_NONE
1819 || h->machine != EM_NONE
1820 || h->ident.osabi != OSABI_NONE)
1821 {
1822 error_at (loc, "unexpected encapsulation format or type");
1823 goto failed;
1824 }
1825
1826 int e = -1;
1827 if (!h->shoff || h->shentsize != sizeof (section))
1828 {
1829 malformed:
1830 set_error (e);
1831 error_at (loc, "encapsulation is malformed");
1832 goto failed;
1833 }
1834
1835 unsigned strndx = h->shstrndx;
1836 unsigned shnum = h->shnum;
1837 if (shnum == SHN_XINDEX)
1838 {
1839 if (!read (&sectab, h->shoff, sizeof (section)))
1840 {
1841 section_table_fail:
1842 e = errno;
1843 goto malformed;
1844 }
1845 shnum = get_section (0)->size;
1846 /* Freeing does mean we'll re-read it in the case we're not
1847 mapping, but this is going to be rare. */
1848 shrink (sectab);
1849 }
1850
1851 if (!shnum)
1852 goto malformed;
1853
1854 if (!read (&sectab, h->shoff, shnum * sizeof (section)))
1855 goto section_table_fail;
1856
1857 if (strndx == SHN_XINDEX)
1858 strndx = get_section (0)->link;
1859
1860 if (!read (&strtab, find (strndx, SHT_STRTAB)))
1861 goto malformed;
1862
1863 /* The string table should be at least one byte, with NUL chars
1864 at either end. */
1865 if (!(strtab.size && !strtab.buffer[0]
1866 && !strtab.buffer[strtab.size - 1]))
1867 goto malformed;
1868
1869 #if MAPPED_READING
1870 /* Record the offsets of the section and string tables. */
1871 sectab.pos = h->shoff;
1872 strtab.pos = shnum * sizeof (section);
1873 #else
1874 shrink (hdr);
1875 #endif
1876
1877 return true;
1878 }
1879
1880 /* Create a new mapping. */
1881
1882 #if MAPPED_WRITING
1883 void
1884 elf_out::create_mapping (unsigned ext, bool extending)
1885 {
1886 #ifndef HAVE_POSIX_FALLOCATE
1887 #define posix_fallocate(fd,off,len) ftruncate (fd, off + len)
1888 #endif
1889 void *mapping = MAP_FAILED;
1890 if (extending && ext < 1024 * 1024)
1891 {
1892 if (!posix_fallocate (fd, offset, ext * 2))
1893 mapping = mmap (NULL, ext * 2, PROT_READ | PROT_WRITE,
1894 MAP_SHARED, fd, offset);
1895 if (mapping != MAP_FAILED)
1896 ext *= 2;
1897 }
1898 if (mapping == MAP_FAILED)
1899 {
1900 if (!extending || !posix_fallocate (fd, offset, ext))
1901 mapping = mmap (NULL, ext, PROT_READ | PROT_WRITE,
1902 MAP_SHARED, fd, offset);
1903 if (mapping == MAP_FAILED)
1904 {
1905 set_error (errno);
1906 mapping = NULL;
1907 ext = 0;
1908 }
1909 }
1910 #undef posix_fallocate
1911 hdr.buffer = (char *)mapping;
1912 extent = ext;
1913 }
1914 #endif
1915
1916 /* Flush out the current mapping. */
1917
1918 #if MAPPED_WRITING
1919 void
1920 elf_out::remove_mapping ()
1921 {
1922 if (hdr.buffer)
1923 {
1924 /* MS_ASYNC dtrt with the removed mapping, including a
1925 subsequent overlapping remap. */
1926 if (msync (hdr.buffer, extent, MS_ASYNC)
1927 || munmap (hdr.buffer, extent))
1928 /* We're somewhat screwed at this point. */
1929 set_error (errno);
1930 }
1931
1932 hdr.buffer = NULL;
1933 }
1934 #endif
1935
1936 /* Grow a mapping of PTR to be NEEDED bytes long. This gets
1937 interesting if the new size grows the EXTENT. */
1938
1939 char *
1940 elf_out::grow (char *data, unsigned needed)
1941 {
1942 if (!data)
1943 {
1944 /* First allocation, check we're aligned. */
1945 gcc_checking_assert (!(pos & (SECTION_ALIGN - 1)));
1946 #if MAPPED_WRITING
1947 data = hdr.buffer + (pos - offset);
1948 #endif
1949 }
1950
1951 #if MAPPED_WRITING
1952 unsigned off = data - hdr.buffer;
1953 if (off + needed > extent)
1954 {
1955 /* We need to grow the mapping. */
1956 unsigned lwm = off & ~(page_size - 1);
1957 unsigned hwm = (off + needed + page_size - 1) & ~(page_size - 1);
1958
1959 gcc_checking_assert (hwm > extent);
1960
1961 remove_mapping ();
1962
1963 offset += lwm;
1964 create_mapping (extent < hwm - lwm ? hwm - lwm : extent);
1965
1966 data = hdr.buffer + (off - lwm);
1967 }
1968 #else
1969 data = allocator::grow (data, needed);
1970 #endif
1971
1972 return data;
1973 }
1974
1975 #if MAPPED_WRITING
1976 /* Shrinking is a NOP. */
1977 void
1978 elf_out::shrink (char *)
1979 {
1980 }
1981 #endif
1982
1983 /* Write S of length L to the strtab buffer. L must include the ending
1984 NUL, if that's what you want. */
1985
1986 unsigned
1987 elf_out::strtab_write (const char *s, unsigned l)
1988 {
1989 if (strtab.pos + l > strtab.size)
1990 data::simple_memory.grow (strtab, strtab.pos + l, false);
1991 memcpy (strtab.buffer + strtab.pos, s, l);
1992 unsigned res = strtab.pos;
1993 strtab.pos += l;
1994 return res;
1995 }
1996
1997 /* Write qualified name of decl. INNER >0 if this is a definition, <0
1998 if this is a qualifier of an outer name. */
1999
2000 void
2001 elf_out::strtab_write (tree decl, int inner)
2002 {
2003 tree ctx = CP_DECL_CONTEXT (decl);
2004 if (TYPE_P (ctx))
2005 ctx = TYPE_NAME (ctx);
2006 if (ctx != global_namespace)
2007 strtab_write (ctx, -1);
2008
2009 tree name = DECL_NAME (decl);
2010 if (!name)
2011 name = DECL_ASSEMBLER_NAME_RAW (decl);
2012 strtab_write (IDENTIFIER_POINTER (name), IDENTIFIER_LENGTH (name));
2013
2014 if (inner)
2015 strtab_write (&"::{}"[inner+1], 2);
2016 }
2017
2018 /* Map IDENTIFIER IDENT to strtab offset. Inserts into strtab if not
2019 already there. */
2020
2021 unsigned
2022 elf_out::name (tree ident)
2023 {
2024 unsigned res = 0;
2025 if (ident)
2026 {
2027 bool existed;
2028 int *slot = &identtab.get_or_insert (ident, &existed);
2029 if (!existed)
2030 *slot = strtab_write (IDENTIFIER_POINTER (ident),
2031 IDENTIFIER_LENGTH (ident) + 1);
2032 res = *slot;
2033 }
2034 return res;
2035 }
2036
2037 /* Map LITERAL to strtab offset. Does not detect duplicates and
2038 expects LITERAL to remain live until strtab is written out. */
2039
2040 unsigned
2041 elf_out::name (const char *literal)
2042 {
2043 return strtab_write (literal, strlen (literal) + 1);
2044 }
2045
2046 /* Map a DECL's qualified name to strtab offset. Does not detect
2047 duplicates. */
2048
2049 unsigned
2050 elf_out::qualified_name (tree decl, bool is_defn)
2051 {
2052 gcc_checking_assert (DECL_P (decl) && decl != global_namespace);
2053 unsigned result = strtab.pos;
2054
2055 strtab_write (decl, is_defn);
2056 strtab_write ("", 1);
2057
2058 return result;
2059 }
2060
2061 /* Add section to file. Return section number. TYPE & NAME identify
2062 the section. OFF and SIZE identify the file location of its
2063 data. FLAGS contains additional info. */
2064
2065 unsigned
2066 elf_out::add (unsigned type, unsigned name, unsigned off, unsigned size,
2067 unsigned flags)
2068 {
2069 gcc_checking_assert (!(off & (SECTION_ALIGN - 1)));
2070 if (sectab.pos + sizeof (section) > sectab.size)
2071 data::simple_memory.grow (sectab, sectab.pos + sizeof (section), false);
2072 section *sec = reinterpret_cast<section *> (sectab.buffer + sectab.pos);
2073 memset (sec, 0, sizeof (section));
2074 sec->type = type;
2075 sec->flags = flags;
2076 sec->name = name;
2077 sec->offset = off;
2078 sec->size = size;
2079 if (flags & SHF_STRINGS)
2080 sec->entsize = 1;
2081
2082 unsigned res = sectab.pos;
2083 sectab.pos += sizeof (section);
2084 return res / sizeof (section);
2085 }
2086
2087 /* Pad to the next alignment boundary, then write BUFFER to disk.
2088 Return the position of the start of the write, or zero on failure. */
2089
2090 unsigned
2091 elf_out::write (const data &buffer)
2092 {
2093 #if MAPPED_WRITING
2094 /* HDR is always mapped. */
2095 if (&buffer != &hdr)
2096 {
2097 bytes_out out (this);
2098 grow (out, buffer.pos, true);
2099 if (out.buffer)
2100 memcpy (out.buffer, buffer.buffer, buffer.pos);
2101 shrink (out);
2102 }
2103 else
2104 /* We should have been aligned during the first allocation. */
2105 gcc_checking_assert (!(pos & (SECTION_ALIGN - 1)));
2106 #else
2107 if (::write (fd, buffer.buffer, buffer.pos) != ssize_t (buffer.pos))
2108 {
2109 set_error (errno);
2110 return 0;
2111 }
2112 #endif
2113 unsigned res = pos;
2114 pos += buffer.pos;
2115
2116 if (unsigned padding = -pos & (SECTION_ALIGN - 1))
2117 {
2118 #if !MAPPED_WRITING
2119 /* Align the section on disk, should help the necessary copies.
2120 fseeking to extend is non-portable. */
2121 static char zero[SECTION_ALIGN];
2122 if (::write (fd, &zero, padding) != ssize_t (padding))
2123 set_error (errno);
2124 #endif
2125 pos += padding;
2126 }
2127 return res;
2128 }
2129
2130 /* Write a streaming buffer. It must be using us as an allocator. */
2131
2132 #if MAPPED_WRITING
2133 unsigned
2134 elf_out::write (const bytes_out &buf)
2135 {
2136 gcc_checking_assert (buf.memory == this);
2137 /* A directly mapped buffer. */
2138 gcc_checking_assert (buf.buffer - hdr.buffer >= 0
2139 && buf.buffer - hdr.buffer + buf.size <= extent);
2140 unsigned res = pos;
2141 pos += buf.pos;
2142
2143 /* Align up. We're not going to advance into the next page. */
2144 pos += -pos & (SECTION_ALIGN - 1);
2145
2146 return res;
2147 }
2148 #endif
2149
2150 /* Write data and add section. STRING_P is true for a string
2151 section, false for PROGBITS. NAME identifies the section (0 is the
2152 empty name). DATA is the contents. Return section number or 0 on
2153 failure (0 is the undef section). */
2154
2155 unsigned
2156 elf_out::add (const bytes_out &data, bool string_p, unsigned name)
2157 {
2158 unsigned off = write (data);
2159
2160 return add (string_p ? SHT_STRTAB : SHT_PROGBITS, name,
2161 off, data.pos, string_p ? SHF_STRINGS : SHF_NONE);
2162 }
2163
2164 /* Begin writing the file. Initialize the section table and write an
2165 empty header. Return false on failure. */
2166
2167 bool
2168 elf_out::begin ()
2169 {
2170 if (!parent::begin ())
2171 return false;
2172
2173 /* Let the allocators pick a default. */
2174 data::simple_memory.grow (strtab, 0, false);
2175 data::simple_memory.grow (sectab, 0, false);
2176
2177 /* The string table starts with an empty string. */
2178 name ("");
2179
2180 /* Create the UNDEF section. */
2181 add (SHT_NONE);
2182
2183 #if MAPPED_WRITING
2184 /* Start a mapping. */
2185 create_mapping (EXPERIMENT (page_size,
2186 (32767 + page_size) & ~(page_size - 1)));
2187 if (!hdr.buffer)
2188 return false;
2189 #endif
2190
2191 /* Write an empty header. */
2192 grow (hdr, sizeof (header), true);
2193 header *h = reinterpret_cast<header *> (hdr.buffer);
2194 memset (h, 0, sizeof (header));
2195 hdr.pos = hdr.size;
2196 write (hdr);
2197 return !get_error ();
2198 }
2199
2200 /* Finish writing the file. Write out the string & section tables.
2201 Fill in the header. Return true on error. */
2202
2203 bool
2204 elf_out::end ()
2205 {
2206 if (fd >= 0)
2207 {
2208 /* Write the string table. */
2209 unsigned strnam = name (".strtab");
2210 unsigned stroff = write (strtab);
2211 unsigned strndx = add (SHT_STRTAB, strnam, stroff, strtab.pos,
2212 SHF_STRINGS);
2213
2214 /* Store escape values in section[0]. */
2215 if (strndx >= SHN_LORESERVE)
2216 {
2217 reinterpret_cast<section *> (sectab.buffer)->link = strndx;
2218 strndx = SHN_XINDEX;
2219 }
2220 unsigned shnum = sectab.pos / sizeof (section);
2221 if (shnum >= SHN_LORESERVE)
2222 {
2223 reinterpret_cast<section *> (sectab.buffer)->size = shnum;
2224 shnum = SHN_XINDEX;
2225 }
2226
2227 unsigned shoff = write (sectab);
2228
2229 #if MAPPED_WRITING
2230 if (offset)
2231 {
2232 remove_mapping ();
2233 offset = 0;
2234 create_mapping ((sizeof (header) + page_size - 1) & ~(page_size - 1),
2235 false);
2236 }
2237 unsigned length = pos;
2238 #else
2239 if (lseek (fd, 0, SEEK_SET) < 0)
2240 set_error (errno);
2241 #endif
2242 /* Write header. */
2243 if (!get_error ())
2244 {
2245 /* Write the correct header now. */
2246 header *h = reinterpret_cast<header *> (hdr.buffer);
2247 h->ident.magic[0] = 0x7f;
2248 h->ident.magic[1] = 'E'; /* Elrond */
2249 h->ident.magic[2] = 'L'; /* is an */
2250 h->ident.magic[3] = 'F'; /* elf. */
2251 h->ident.klass = MY_CLASS;
2252 h->ident.data = MY_ENDIAN;
2253 h->ident.version = EV_CURRENT;
2254 h->ident.osabi = OSABI_NONE;
2255 h->type = ET_NONE;
2256 h->machine = EM_NONE;
2257 h->version = EV_CURRENT;
2258 h->shoff = shoff;
2259 h->ehsize = sizeof (header);
2260 h->shentsize = sizeof (section);
2261 h->shnum = shnum;
2262 h->shstrndx = strndx;
2263
2264 pos = 0;
2265 write (hdr);
2266 }
2267
2268 #if MAPPED_WRITING
2269 remove_mapping ();
2270 if (ftruncate (fd, length))
2271 set_error (errno);
2272 #endif
2273 }
2274
2275 data::simple_memory.shrink (sectab);
2276 data::simple_memory.shrink (strtab);
2277
2278 return parent::end ();
2279 }
2280
2281 /********************************************************************/
2282
2283 /* A dependency set. This is used during stream out to determine the
2284 connectivity of the graph. Every namespace-scope declaration that
2285 needs writing has a depset. The depset is filled with the (depsets
2286 of) declarations within this module that it references. For a
2287 declaration that'll generally be named types. For definitions
2288 it'll also be declarations in the body.
2289
2290 From that we can convert the graph to a DAG, via determining the
2291 Strongly Connected Clusters. Each cluster is streamed
2292 independently, and thus we achieve lazy loading.
2293
2294 Other decls that get a depset are namespaces themselves and
2295 unnameable declarations. */
2296
2297 class depset {
2298 private:
2299 tree entity; /* Entity, or containing namespace. */
2300 uintptr_t discriminator; /* Flags or identifier. */
2301
2302 public:
2303 /* The kinds of entity the depset could describe. The ordering is
2304 significant, see entity_kind_name. */
2305 enum entity_kind
2306 {
2307 EK_DECL, /* A decl. */
2308 EK_SPECIALIZATION, /* A specialization. */
2309 EK_PARTIAL, /* A partial specialization. */
2310 EK_USING, /* A using declaration (at namespace scope). */
2311 EK_NAMESPACE, /* A namespace. */
2312 EK_REDIRECT, /* Redirect to a template_decl. */
2313 EK_EXPLICIT_HWM,
2314 EK_BINDING = EK_EXPLICIT_HWM, /* Implicitly encoded. */
2315 EK_FOR_BINDING, /* A decl being inserted for a binding. */
2316 EK_INNER_DECL, /* A decl defined outside of its imported
2317 context. */
2318 EK_DIRECT_HWM = EK_PARTIAL + 1,
2319
2320 EK_BITS = 3 /* Only need to encode below EK_EXPLICIT_HWM. */
2321 };
2322
2323 private:
2324 /* Placement of bit fields in discriminator. */
2325 enum disc_bits
2326 {
2327 DB_ZERO_BIT, /* Set to disambiguate identifier from flags */
2328 DB_SPECIAL_BIT, /* First dep slot is special. */
2329 DB_KIND_BIT, /* Kind of the entity. */
2330 DB_KIND_BITS = EK_BITS,
2331 DB_DEFN_BIT = DB_KIND_BIT + DB_KIND_BITS,
2332 DB_IS_MEMBER_BIT, /* Is an out-of-class member. */
2333 DB_IS_INTERNAL_BIT, /* It is an (erroneous)
2334 internal-linkage entity. */
2335 DB_REFS_INTERNAL_BIT, /* Refers to an internal-linkage
2336 entity. */
2337 DB_IMPORTED_BIT, /* An imported entity. */
2338 DB_UNREACHED_BIT, /* A yet-to-be reached entity. */
2339 DB_HIDDEN_BIT, /* A hidden binding. */
2340 /* The following bits are not independent, but enumerating them is
2341 awkward. */
2342 DB_TYPE_SPEC_BIT, /* Specialization in the type table. */
2343 DB_FRIEND_SPEC_BIT, /* An instantiated template friend. */
2344 };
2345
2346 public:
2347 /* The first slot is special for EK_SPECIALIZATIONS it is a
2348 spec_entry pointer. It is not relevant for the SCC
2349 determination. */
2350 vec<depset *> deps; /* Depsets we reference. */
2351
2352 public:
2353 unsigned cluster; /* Strongly connected cluster, later entity number */
2354 unsigned section; /* Section written to. */
2355 /* During SCC construction, section is lowlink, until the depset is
2356 removed from the stack. See Tarjan algorithm for details. */
2357
2358 private:
2359 /* Construction via factories. Destruction via hash traits. */
2360 depset (tree entity);
2361 ~depset ();
2362
2363 public:
2364 static depset *make_binding (tree, tree);
2365 static depset *make_entity (tree, entity_kind, bool = false);
2366 /* Late setting a binding name -- /then/ insert into hash! */
2367 inline void set_binding_name (tree name)
2368 {
2369 gcc_checking_assert (!get_name ());
2370 discriminator = reinterpret_cast<uintptr_t> (name);
2371 }
2372
2373 private:
2374 template<unsigned I> void set_flag_bit ()
2375 {
2376 gcc_checking_assert (I < 2 || !is_binding ());
2377 discriminator |= 1u << I;
2378 }
2379 template<unsigned I> void clear_flag_bit ()
2380 {
2381 gcc_checking_assert (I < 2 || !is_binding ());
2382 discriminator &= ~(1u << I);
2383 }
2384 template<unsigned I> bool get_flag_bit () const
2385 {
2386 gcc_checking_assert (I < 2 || !is_binding ());
2387 return bool ((discriminator >> I) & 1);
2388 }
2389
2390 public:
2391 bool is_binding () const
2392 {
2393 return !get_flag_bit<DB_ZERO_BIT> ();
2394 }
2395 entity_kind get_entity_kind () const
2396 {
2397 if (is_binding ())
2398 return EK_BINDING;
2399 return entity_kind ((discriminator >> DB_KIND_BIT) & ((1u << EK_BITS) - 1));
2400 }
2401 const char *entity_kind_name () const;
2402
2403 public:
2404 bool has_defn () const
2405 {
2406 return get_flag_bit<DB_DEFN_BIT> ();
2407 }
2408
2409 public:
2410 /* This class-member is defined here, but the class was imported. */
2411 bool is_member () const
2412 {
2413 gcc_checking_assert (get_entity_kind () == EK_DECL);
2414 return get_flag_bit<DB_IS_MEMBER_BIT> ();
2415 }
2416 public:
2417 bool is_internal () const
2418 {
2419 return get_flag_bit<DB_IS_INTERNAL_BIT> ();
2420 }
2421 bool refs_internal () const
2422 {
2423 return get_flag_bit<DB_REFS_INTERNAL_BIT> ();
2424 }
2425 bool is_import () const
2426 {
2427 return get_flag_bit<DB_IMPORTED_BIT> ();
2428 }
2429 bool is_unreached () const
2430 {
2431 return get_flag_bit<DB_UNREACHED_BIT> ();
2432 }
2433 bool is_hidden () const
2434 {
2435 return get_flag_bit<DB_HIDDEN_BIT> ();
2436 }
2437 bool is_type_spec () const
2438 {
2439 return get_flag_bit<DB_TYPE_SPEC_BIT> ();
2440 }
2441 bool is_friend_spec () const
2442 {
2443 return get_flag_bit<DB_FRIEND_SPEC_BIT> ();
2444 }
2445
2446 public:
2447 /* We set these bit outside of depset. */
2448 void set_hidden_binding ()
2449 {
2450 set_flag_bit<DB_HIDDEN_BIT> ();
2451 }
2452 void clear_hidden_binding ()
2453 {
2454 clear_flag_bit<DB_HIDDEN_BIT> ();
2455 }
2456
2457 public:
2458 bool is_special () const
2459 {
2460 return get_flag_bit<DB_SPECIAL_BIT> ();
2461 }
2462 void set_special ()
2463 {
2464 set_flag_bit<DB_SPECIAL_BIT> ();
2465 }
2466
2467 public:
2468 tree get_entity () const
2469 {
2470 return entity;
2471 }
2472 tree get_name () const
2473 {
2474 gcc_checking_assert (is_binding ());
2475 return reinterpret_cast <tree> (discriminator);
2476 }
2477
2478 public:
2479 /* Traits for a hash table of pointers to bindings. */
2480 struct traits {
2481 /* Each entry is a pointer to a depset. */
2482 typedef depset *value_type;
2483 /* We lookup by container:maybe-identifier pair. */
2484 typedef std::pair<tree,tree> compare_type;
2485
2486 static const bool empty_zero_p = true;
2487
2488 /* hash and equality for compare_type. */
2489 inline static hashval_t hash (const compare_type &p)
2490 {
2491 hashval_t h = pointer_hash<tree_node>::hash (p.first);
2492 if (p.second)
2493 {
2494 hashval_t nh = IDENTIFIER_HASH_VALUE (p.second);
2495 h = iterative_hash_hashval_t (h, nh);
2496 }
2497 return h;
2498 }
2499 inline static bool equal (const value_type b, const compare_type &p)
2500 {
2501 if (b->entity != p.first)
2502 return false;
2503
2504 if (p.second)
2505 return b->discriminator == reinterpret_cast<uintptr_t> (p.second);
2506 else
2507 return !b->is_binding ();
2508 }
2509
2510 /* (re)hasher for a binding itself. */
2511 inline static hashval_t hash (const value_type b)
2512 {
2513 hashval_t h = pointer_hash<tree_node>::hash (b->entity);
2514 if (b->is_binding ())
2515 {
2516 hashval_t nh = IDENTIFIER_HASH_VALUE (b->get_name ());
2517 h = iterative_hash_hashval_t (h, nh);
2518 }
2519 return h;
2520 }
2521
2522 /* Empty via NULL. */
2523 static inline void mark_empty (value_type &p) {p = NULL;}
2524 static inline bool is_empty (value_type p) {return !p;}
2525
2526 /* Nothing is deletable. Everything is insertable. */
2527 static bool is_deleted (value_type) { return false; }
2528 static void mark_deleted (value_type) { gcc_unreachable (); }
2529
2530 /* We own the entities in the hash table. */
2531 static void remove (value_type p)
2532 {
2533 delete (p);
2534 }
2535 };
2536
2537 public:
2538 class hash : public hash_table<traits> {
2539 typedef traits::compare_type key_t;
2540 typedef hash_table<traits> parent;
2541
2542 public:
2543 vec<depset *> worklist; /* Worklist of decls to walk. */
2544 hash *chain; /* Original table. */
2545 depset *current; /* Current depset being depended. */
2546 unsigned section; /* When writing out, the section. */
2547 bool reached_unreached; /* We reached an unreached entity. */
2548
2549 public:
2550 hash (size_t size, hash *c = NULL)
2551 : parent (size), chain (c), current (NULL), section (0),
2552 reached_unreached (false)
2553 {
2554 worklist.create (size);
2555 }
2556 ~hash ()
2557 {
2558 worklist.release ();
2559 }
2560
2561 public:
2562 bool is_key_order () const
2563 {
2564 return chain != NULL;
2565 }
2566
2567 private:
2568 depset **entity_slot (tree entity, bool = true);
2569 depset **binding_slot (tree ctx, tree name, bool = true);
2570 depset *maybe_add_declaration (tree decl);
2571
2572 public:
2573 depset *find_dependency (tree entity);
2574 depset *find_binding (tree ctx, tree name);
2575 depset *make_dependency (tree decl, entity_kind);
2576 void add_dependency (depset *);
2577
2578 public:
2579 void add_mergeable (depset *);
2580 depset *add_dependency (tree decl, entity_kind);
2581 void add_namespace_context (depset *, tree ns);
2582
2583 private:
2584 static bool add_binding_entity (tree, WMB_Flags, void *);
2585
2586 public:
2587 bool add_namespace_entities (tree ns, bitmap partitions);
2588 void add_specializations (bool decl_p);
2589 void add_partial_entities (vec<tree, va_gc> *);
2590 void add_class_entities (vec<tree, va_gc> *);
2591
2592 public:
2593 void find_dependencies (module_state *);
2594 bool finalize_dependencies ();
2595 vec<depset *> connect ();
2596 };
2597
2598 public:
2599 struct tarjan {
2600 vec<depset *> result;
2601 vec<depset *> stack;
2602 unsigned index;
2603
2604 tarjan (unsigned size)
2605 : index (0)
2606 {
2607 result.create (size);
2608 stack.create (50);
2609 }
2610 ~tarjan ()
2611 {
2612 gcc_assert (!stack.length ());
2613 stack.release ();
2614 }
2615
2616 public:
2617 void connect (depset *);
2618 };
2619 };
2620
2621 inline
2622 depset::depset (tree entity)
2623 :entity (entity), discriminator (0), cluster (0), section (0)
2624 {
2625 deps.create (0);
2626 }
2627
2628 inline
2629 depset::~depset ()
2630 {
2631 deps.release ();
2632 }
2633
2634 const char *
2635 depset::entity_kind_name () const
2636 {
2637 /* Same order as entity_kind. */
2638 static const char *const names[] =
2639 {"decl", "specialization", "partial", "using",
2640 "namespace", "redirect", "binding"};
2641 entity_kind kind = get_entity_kind ();
2642 gcc_checking_assert (kind < ARRAY_SIZE (names));
2643 return names[kind];
2644 }
2645
2646 /* Create a depset for a namespace binding NS::NAME. */
2647
2648 depset *depset::make_binding (tree ns, tree name)
2649 {
2650 depset *binding = new depset (ns);
2651
2652 binding->discriminator = reinterpret_cast <uintptr_t> (name);
2653
2654 return binding;
2655 }
2656
2657 depset *depset::make_entity (tree entity, entity_kind ek, bool is_defn)
2658 {
2659 depset *r = new depset (entity);
2660
2661 r->discriminator = ((1 << DB_ZERO_BIT)
2662 | (ek << DB_KIND_BIT)
2663 | is_defn << DB_DEFN_BIT);
2664
2665 return r;
2666 }
2667
2668 class pending_key
2669 {
2670 public:
2671 tree ns;
2672 tree id;
2673 };
2674
2675 template<>
2676 struct default_hash_traits<pending_key>
2677 {
2678 using value_type = pending_key;
2679
2680 static const bool empty_zero_p = false;
2681 static hashval_t hash (const value_type &k)
2682 {
2683 hashval_t h = IDENTIFIER_HASH_VALUE (k.id);
2684 h = iterative_hash_hashval_t (DECL_UID (k.ns), h);
2685
2686 return h;
2687 }
2688 static bool equal (const value_type &k, const value_type &l)
2689 {
2690 return k.ns == l.ns && k.id == l.id;
2691 }
2692 static void mark_empty (value_type &k)
2693 {
2694 k.ns = k.id = NULL_TREE;
2695 }
2696 static void mark_deleted (value_type &k)
2697 {
2698 k.ns = NULL_TREE;
2699 gcc_checking_assert (k.id);
2700 }
2701 static bool is_empty (const value_type &k)
2702 {
2703 return k.ns == NULL_TREE && k.id == NULL_TREE;
2704 }
2705 static bool is_deleted (const value_type &k)
2706 {
2707 return k.ns == NULL_TREE && k.id != NULL_TREE;
2708 }
2709 static void remove (value_type &)
2710 {
2711 }
2712 };
2713
2714 typedef hash_map<pending_key, auto_vec<unsigned>> pending_map_t;
2715
2716 /* Not-loaded entities that are keyed to a namespace-scope
2717 identifier. See module_state::write_pendings for details. */
2718 pending_map_t *pending_table;
2719
2720 /* Decls that need some post processing once a batch of lazy loads has
2721 completed. */
2722 vec<tree, va_heap, vl_embed> *post_load_decls;
2723
2724 /* Some entities are keyed to another entitity for ODR purposes.
2725 For example, at namespace scope, 'inline auto var = []{};', that
2726 lambda is keyed to 'var', and follows its ODRness. */
2727 typedef hash_map<tree, auto_vec<tree>> keyed_map_t;
2728 static keyed_map_t *keyed_table;
2729
2730 /* Instantiations of temploid friends imported from another module
2731 need to be attached to the same module as the temploid. This maps
2732 these decls to the temploid they are instantiated them, as there is
2733 no other easy way to get this information. */
2734 static GTY((cache)) decl_tree_cache_map *imported_temploid_friends;
2735
2736 /********************************************************************/
2737 /* Tree streaming. The tree streaming is very specific to the tree
2738 structures themselves. A tag indicates the kind of tree being
2739 streamed. -ve tags indicate backreferences to already-streamed
2740 trees. Backreferences are auto-numbered. */
2741
2742 /* Tree tags. */
2743 enum tree_tag {
2744 tt_null, /* NULL_TREE. */
2745 tt_fixed, /* Fixed vector index. */
2746
2747 tt_node, /* By-value node. */
2748 tt_decl, /* By-value mergeable decl. */
2749 tt_tpl_parm, /* Template parm. */
2750
2751 /* The ordering of the following 4 is relied upon in
2752 trees_out::tree_node. */
2753 tt_id, /* Identifier node. */
2754 tt_conv_id, /* Conversion operator name. */
2755 tt_anon_id, /* Anonymous name. */
2756 tt_lambda_id, /* Lambda name. */
2757
2758 tt_typedef_type, /* A (possibly implicit) typedefed type. */
2759 tt_derived_type, /* A type derived from another type. */
2760 tt_variant_type, /* A variant of another type. */
2761
2762 tt_tinfo_var, /* Typeinfo object. */
2763 tt_tinfo_typedef, /* Typeinfo typedef. */
2764 tt_ptrmem_type, /* Pointer to member type. */
2765 tt_nttp_var, /* NTTP_OBJECT VAR_DECL. */
2766
2767 tt_parm, /* Function parameter or result. */
2768 tt_enum_value, /* An enum value. */
2769 tt_enum_decl, /* An enum decl. */
2770 tt_data_member, /* Data member/using-decl. */
2771
2772 tt_binfo, /* A BINFO. */
2773 tt_vtable, /* A vtable. */
2774 tt_thunk, /* A thunk. */
2775 tt_clone_ref,
2776
2777 tt_entity, /* A extra-cluster entity. */
2778
2779 tt_template, /* The TEMPLATE_RESULT of a template. */
2780 };
2781
2782 enum walk_kind {
2783 WK_none, /* No walk to do (a back- or fixed-ref happened). */
2784 WK_normal, /* Normal walk (by-name if possible). */
2785
2786 WK_value, /* By-value walk. */
2787 };
2788
2789 enum merge_kind
2790 {
2791 MK_unique, /* Known unique. */
2792 MK_named, /* Found by CTX, NAME + maybe_arg types etc. */
2793 MK_field, /* Found by CTX and index on TYPE_FIELDS */
2794 MK_vtable, /* Found by CTX and index on TYPE_VTABLES */
2795 MK_as_base, /* Found by CTX. */
2796
2797 MK_partial,
2798
2799 MK_enum, /* Found by CTX, & 1stMemberNAME. */
2800 MK_keyed, /* Found by key & index. */
2801 MK_local_type, /* Found by CTX, index. */
2802
2803 MK_friend_spec, /* Like named, but has a tmpl & args too. */
2804 MK_local_friend, /* Found by CTX, index. */
2805
2806 MK_indirect_lwm = MK_enum,
2807
2808 /* Template specialization kinds below. These are all found via
2809 primary template and specialization args. */
2810 MK_template_mask = 0x10, /* A template specialization. */
2811
2812 MK_tmpl_decl_mask = 0x4, /* In decl table. */
2813
2814 MK_tmpl_tmpl_mask = 0x1, /* We want TEMPLATE_DECL. */
2815
2816 MK_type_spec = MK_template_mask,
2817 MK_decl_spec = MK_template_mask | MK_tmpl_decl_mask,
2818
2819 MK_hwm = 0x20
2820 };
2821 /* This is more than a debugging array. NULLs are used to determine
2822 an invalid merge_kind number. */
2823 static char const *const merge_kind_name[MK_hwm] =
2824 {
2825 "unique", "named", "field", "vtable", /* 0...3 */
2826 "asbase", "partial", "enum", "attached", /* 4...7 */
2827
2828 "local type", "friend spec", "local friend", NULL, /* 8...11 */
2829 NULL, NULL, NULL, NULL,
2830
2831 "type spec", "type tmpl spec", /* 16,17 type (template). */
2832 NULL, NULL,
2833
2834 "decl spec", "decl tmpl spec", /* 20,21 decl (template). */
2835 NULL, NULL,
2836 NULL, NULL, NULL, NULL,
2837 NULL, NULL, NULL, NULL,
2838 };
2839
2840 /* Mergeable entity location data. */
2841 struct merge_key {
2842 cp_ref_qualifier ref_q : 2;
2843 unsigned index;
2844
2845 tree ret; /* Return type, if appropriate. */
2846 tree args; /* Arg types, if appropriate. */
2847
2848 tree constraints; /* Constraints. */
2849
2850 merge_key ()
2851 :ref_q (REF_QUAL_NONE), index (0),
2852 ret (NULL_TREE), args (NULL_TREE),
2853 constraints (NULL_TREE)
2854 {
2855 }
2856 };
2857
2858 /* Hashmap of merged duplicates. Usually decls, but can contain
2859 BINFOs. */
2860 typedef hash_map<tree,uintptr_t,
2861 simple_hashmap_traits<nodel_ptr_hash<tree_node>,uintptr_t> >
2862 duplicate_hash_map;
2863
2864 /* Data needed for post-processing. */
2865 struct post_process_data {
2866 tree decl;
2867 location_t start_locus;
2868 location_t end_locus;
2869 };
2870
2871 /* Tree stream reader. Note that reading a stream doesn't mark the
2872 read trees with TREE_VISITED. Thus it's quite safe to have
2873 multiple concurrent readers. Which is good, because lazy
2874 loading.
2875
2876 It's important that trees_in/out have internal linkage so that the
2877 compiler knows core_bools, lang_type_bools and lang_decl_bools have
2878 only a single caller (tree_node_bools) and inlines them appropriately. */
2879 namespace {
2880 class trees_in : public bytes_in {
2881 typedef bytes_in parent;
2882
2883 private:
2884 module_state *state; /* Module being imported. */
2885 vec<tree> back_refs; /* Back references. */
2886 duplicate_hash_map *duplicates; /* Map from existings to duplicate. */
2887 vec<post_process_data> post_decls; /* Decls to post process. */
2888 unsigned unused; /* Inhibit any interior TREE_USED
2889 marking. */
2890
2891 public:
2892 trees_in (module_state *);
2893 ~trees_in ();
2894
2895 public:
2896 int insert (tree);
2897 tree back_ref (int);
2898
2899 private:
2900 tree start (unsigned = 0);
2901
2902 public:
2903 /* Needed for binfo writing */
2904 bool core_bools (tree, bits_in&);
2905
2906 private:
2907 /* Stream tree_core, lang_decl_specific and lang_type_specific
2908 bits. */
2909 bool core_vals (tree);
2910 bool lang_type_bools (tree, bits_in&);
2911 bool lang_type_vals (tree);
2912 bool lang_decl_bools (tree, bits_in&);
2913 bool lang_decl_vals (tree);
2914 bool lang_vals (tree);
2915 bool tree_node_bools (tree);
2916 bool tree_node_vals (tree);
2917 tree tree_value ();
2918 tree decl_value ();
2919 tree tpl_parm_value ();
2920
2921 private:
2922 tree chained_decls (); /* Follow DECL_CHAIN. */
2923 vec<tree, va_heap> *vec_chained_decls ();
2924 vec<tree, va_gc> *tree_vec (); /* vec of tree. */
2925 vec<tree_pair_s, va_gc> *tree_pair_vec (); /* vec of tree_pair. */
2926 tree tree_list (bool has_purpose);
2927
2928 public:
2929 /* Read a tree node. */
2930 tree tree_node (bool is_use = false);
2931
2932 private:
2933 bool install_entity (tree decl);
2934 tree tpl_parms (unsigned &tpl_levels);
2935 bool tpl_parms_fini (tree decl, unsigned tpl_levels);
2936 bool tpl_header (tree decl, unsigned *tpl_levels);
2937 int fn_parms_init (tree);
2938 void fn_parms_fini (int tag, tree fn, tree existing, bool has_defn);
2939 unsigned add_indirect_tpl_parms (tree);
2940 public:
2941 bool add_indirects (tree);
2942
2943 public:
2944 /* Serialize various definitions. */
2945 bool read_definition (tree decl);
2946
2947 private:
2948 bool is_matching_decl (tree existing, tree decl, bool is_typedef);
2949 static bool install_implicit_member (tree decl);
2950 bool read_function_def (tree decl, tree maybe_template);
2951 bool read_var_def (tree decl, tree maybe_template);
2952 bool read_class_def (tree decl, tree maybe_template);
2953 bool read_enum_def (tree decl, tree maybe_template);
2954
2955 public:
2956 tree decl_container ();
2957 tree key_mergeable (int tag, merge_kind, tree decl, tree inner, tree type,
2958 tree container, bool is_attached);
2959 unsigned binfo_mergeable (tree *);
2960
2961 private:
2962 tree key_local_type (const merge_key&, tree, tree);
2963 uintptr_t *find_duplicate (tree existing);
2964 void register_duplicate (tree decl, tree existing);
2965 /* Mark as an already diagnosed bad duplicate. */
2966 void unmatched_duplicate (tree existing)
2967 {
2968 *find_duplicate (existing) |= 1;
2969 }
2970
2971 public:
2972 bool is_duplicate (tree decl)
2973 {
2974 return find_duplicate (decl) != NULL;
2975 }
2976 tree maybe_duplicate (tree decl)
2977 {
2978 if (uintptr_t *dup = find_duplicate (decl))
2979 return reinterpret_cast<tree> (*dup & ~uintptr_t (1));
2980 return decl;
2981 }
2982 tree odr_duplicate (tree decl, bool has_defn);
2983
2984 public:
2985 /* Return the decls to postprocess. */
2986 const vec<post_process_data>& post_process ()
2987 {
2988 return post_decls;
2989 }
2990 private:
2991 /* Register DATA for postprocessing. */
2992 void post_process (post_process_data data)
2993 {
2994 post_decls.safe_push (data);
2995 }
2996
2997 private:
2998 void assert_definition (tree, bool installing);
2999 };
3000 } // anon namespace
3001
3002 trees_in::trees_in (module_state *state)
3003 :parent (), state (state), unused (0)
3004 {
3005 duplicates = NULL;
3006 back_refs.create (500);
3007 post_decls.create (0);
3008 }
3009
3010 trees_in::~trees_in ()
3011 {
3012 delete (duplicates);
3013 back_refs.release ();
3014 post_decls.release ();
3015 }
3016
3017 /* Tree stream writer. */
3018 namespace {
3019 class trees_out : public bytes_out {
3020 typedef bytes_out parent;
3021
3022 private:
3023 module_state *state; /* The module we are writing. */
3024 ptr_int_hash_map tree_map; /* Trees to references */
3025 depset::hash *dep_hash; /* Dependency table. */
3026 int ref_num; /* Back reference number. */
3027 unsigned section;
3028 #if CHECKING_P
3029 int importedness; /* Checker that imports not occurring
3030 inappropriately. +ve imports ok,
3031 -ve imports not ok. */
3032 #endif
3033
3034 public:
3035 trees_out (allocator *, module_state *, depset::hash &deps, unsigned sec = 0);
3036 ~trees_out ();
3037
3038 private:
3039 void mark_trees ();
3040 void unmark_trees ();
3041
3042 public:
3043 /* Hey, let's ignore the well known STL iterator idiom. */
3044 void begin ();
3045 unsigned end (elf_out *sink, unsigned name, unsigned *crc_ptr);
3046 void end ();
3047
3048 public:
3049 enum tags
3050 {
3051 tag_backref = -1, /* Upper bound on the backrefs. */
3052 tag_value = 0, /* Write by value. */
3053 tag_fixed /* Lower bound on the fixed trees. */
3054 };
3055
3056 public:
3057 bool is_key_order () const
3058 {
3059 return dep_hash->is_key_order ();
3060 }
3061
3062 public:
3063 int insert (tree, walk_kind = WK_normal);
3064
3065 private:
3066 void start (tree, bool = false);
3067
3068 private:
3069 walk_kind ref_node (tree);
3070 public:
3071 int get_tag (tree);
3072 void set_importing (int i ATTRIBUTE_UNUSED)
3073 {
3074 #if CHECKING_P
3075 importedness = i;
3076 #endif
3077 }
3078
3079 private:
3080 void core_bools (tree, bits_out&);
3081 void core_vals (tree);
3082 void lang_type_bools (tree, bits_out&);
3083 void lang_type_vals (tree);
3084 void lang_decl_bools (tree, bits_out&);
3085 void lang_decl_vals (tree);
3086 void lang_vals (tree);
3087 void tree_node_bools (tree);
3088 void tree_node_vals (tree);
3089
3090 private:
3091 void chained_decls (tree);
3092 void vec_chained_decls (tree);
3093 void tree_vec (vec<tree, va_gc> *);
3094 void tree_pair_vec (vec<tree_pair_s, va_gc> *);
3095 void tree_list (tree, bool has_purpose);
3096
3097 public:
3098 /* Mark a node for by-value walking. */
3099 void mark_by_value (tree);
3100
3101 public:
3102 void tree_node (tree);
3103
3104 private:
3105 void install_entity (tree decl, depset *);
3106 void tpl_parms (tree parms, unsigned &tpl_levels);
3107 void tpl_parms_fini (tree decl, unsigned tpl_levels);
3108 void fn_parms_fini (tree) {}
3109 unsigned add_indirect_tpl_parms (tree);
3110 public:
3111 void add_indirects (tree);
3112 void fn_parms_init (tree);
3113 void tpl_header (tree decl, unsigned *tpl_levels);
3114
3115 public:
3116 merge_kind get_merge_kind (tree decl, depset *maybe_dep);
3117 tree decl_container (tree decl);
3118 void key_mergeable (int tag, merge_kind, tree decl, tree inner,
3119 tree container, depset *maybe_dep);
3120 void binfo_mergeable (tree binfo);
3121
3122 private:
3123 void key_local_type (merge_key&, tree, tree);
3124 bool decl_node (tree, walk_kind ref);
3125 void type_node (tree);
3126 void tree_value (tree);
3127 void tpl_parm_value (tree);
3128
3129 public:
3130 void decl_value (tree, depset *);
3131
3132 public:
3133 /* Serialize various definitions. */
3134 void write_definition (tree decl);
3135 void mark_declaration (tree decl, bool do_defn);
3136
3137 private:
3138 void mark_function_def (tree decl);
3139 void mark_var_def (tree decl);
3140 void mark_class_def (tree decl);
3141 void mark_enum_def (tree decl);
3142 void mark_class_member (tree decl, bool do_defn = true);
3143 void mark_binfos (tree type);
3144
3145 private:
3146 void write_var_def (tree decl);
3147 void write_function_def (tree decl);
3148 void write_class_def (tree decl);
3149 void write_enum_def (tree decl);
3150
3151 private:
3152 static void assert_definition (tree);
3153
3154 public:
3155 static void instrument ();
3156
3157 private:
3158 /* Tree instrumentation. */
3159 static unsigned tree_val_count;
3160 static unsigned decl_val_count;
3161 static unsigned back_ref_count;
3162 static unsigned null_count;
3163 };
3164 } // anon namespace
3165
3166 /* Instrumentation counters. */
3167 unsigned trees_out::tree_val_count;
3168 unsigned trees_out::decl_val_count;
3169 unsigned trees_out::back_ref_count;
3170 unsigned trees_out::null_count;
3171
3172 trees_out::trees_out (allocator *mem, module_state *state, depset::hash &deps,
3173 unsigned section)
3174 :parent (mem), state (state), tree_map (500),
3175 dep_hash (&deps), ref_num (0), section (section)
3176 {
3177 #if CHECKING_P
3178 importedness = 0;
3179 #endif
3180 }
3181
3182 trees_out::~trees_out ()
3183 {
3184 }
3185
3186 /********************************************************************/
3187 /* Location. We're aware of the line-map concept and reproduce it
3188 here. Each imported module allocates a contiguous span of ordinary
3189 maps, and of macro maps. adhoc maps are serialized by contents,
3190 not pre-allocated. The scattered linemaps of a module are
3191 coalesced when writing. */
3192
3193
3194 /* I use half-open [first,second) ranges. */
3195 typedef std::pair<unsigned,unsigned> range_t;
3196
3197 /* A range of locations. */
3198 typedef std::pair<location_t,location_t> loc_range_t;
3199
3200 /* Spans of the line maps that are occupied by this TU. I.e. not
3201 within imports. Only extended when in an interface unit.
3202 Interval zero corresponds to the forced header linemap(s). This
3203 is a singleton object. */
3204
3205 class loc_spans {
3206 public:
3207 /* An interval of line maps. The line maps here represent a contiguous
3208 non-imported range. */
3209 struct span {
3210 loc_range_t ordinary; /* Ordinary map location range. */
3211 loc_range_t macro; /* Macro map location range. */
3212 int ordinary_delta; /* Add to ordinary loc to get serialized loc. */
3213 int macro_delta; /* Likewise for macro loc. */
3214 };
3215
3216 private:
3217 vec<span> *spans;
3218
3219 public:
3220 loc_spans ()
3221 /* Do not preallocate spans, as that causes
3222 --enable-detailed-mem-stats problems. */
3223 : spans (nullptr)
3224 {
3225 }
3226 ~loc_spans ()
3227 {
3228 delete spans;
3229 }
3230
3231 public:
3232 span &operator[] (unsigned ix)
3233 {
3234 return (*spans)[ix];
3235 }
3236 unsigned length () const
3237 {
3238 return spans->length ();
3239 }
3240
3241 public:
3242 bool init_p () const
3243 {
3244 return spans != nullptr;
3245 }
3246 /* Initializer. */
3247 void init (const line_maps *lmaps, const line_map_ordinary *map);
3248
3249 /* Slightly skewed preprocessed files can cause us to miss an
3250 initialization in some places. Fallback initializer. */
3251 void maybe_init ()
3252 {
3253 if (!init_p ())
3254 init (line_table, nullptr);
3255 }
3256
3257 public:
3258 enum {
3259 SPAN_RESERVED = 0, /* Reserved (fixed) locations. */
3260 SPAN_FIRST = 1, /* LWM of locations to stream */
3261 SPAN_MAIN = 2 /* Main file and onwards. */
3262 };
3263
3264 public:
3265 location_t main_start () const
3266 {
3267 return (*spans)[SPAN_MAIN].ordinary.first;
3268 }
3269
3270 public:
3271 void open (location_t);
3272 void close ();
3273
3274 public:
3275 /* Propagate imported linemaps to us, if needed. */
3276 bool maybe_propagate (module_state *import, location_t loc);
3277
3278 public:
3279 const span *ordinary (location_t);
3280 const span *macro (location_t);
3281 };
3282
3283 static loc_spans spans;
3284
3285 /* Information about ordinary locations we stream out. */
3286 struct ord_loc_info
3287 {
3288 const line_map_ordinary *src; // line map we're based on
3289 unsigned offset; // offset to this line
3290 unsigned span; // number of locs we span
3291 unsigned remap; // serialization
3292
3293 static int compare (const void *a_, const void *b_)
3294 {
3295 auto *a = static_cast<const ord_loc_info *> (a_);
3296 auto *b = static_cast<const ord_loc_info *> (b_);
3297
3298 if (a->src != b->src)
3299 return a->src < b->src ? -1 : +1;
3300
3301 // Ensure no overlap
3302 gcc_checking_assert (a->offset + a->span <= b->offset
3303 || b->offset + b->span <= a->offset);
3304
3305 gcc_checking_assert (a->offset != b->offset);
3306 return a->offset < b->offset ? -1 : +1;
3307 }
3308 };
3309 struct ord_loc_traits
3310 {
3311 typedef ord_loc_info value_type;
3312 typedef value_type compare_type;
3313
3314 static const bool empty_zero_p = false;
3315
3316 static hashval_t hash (const value_type &v)
3317 {
3318 auto h = pointer_hash<const line_map_ordinary>::hash (v.src);
3319 return iterative_hash_hashval_t (v.offset, h);
3320 }
3321 static bool equal (const value_type &v, const compare_type p)
3322 {
3323 return v.src == p.src && v.offset == p.offset;
3324 }
3325
3326 static void mark_empty (value_type &v)
3327 {
3328 v.src = nullptr;
3329 }
3330 static bool is_empty (value_type &v)
3331 {
3332 return !v.src;
3333 }
3334
3335 static bool is_deleted (value_type &) { return false; }
3336 static void mark_deleted (value_type &) { gcc_unreachable (); }
3337
3338 static void remove (value_type &) {}
3339 };
3340 /* Table keyed by ord_loc_info, used for noting. */
3341 static hash_table<ord_loc_traits> *ord_loc_table;
3342 /* Sorted vector, used for writing. */
3343 static vec<ord_loc_info> *ord_loc_remap;
3344
3345 /* Information about macro locations we stream out. */
3346 struct macro_loc_info
3347 {
3348 const line_map_macro *src; // original expansion
3349 unsigned remap; // serialization
3350
3351 static int compare (const void *a_, const void *b_)
3352 {
3353 auto *a = static_cast<const macro_loc_info *> (a_);
3354 auto *b = static_cast<const macro_loc_info *> (b_);
3355
3356 gcc_checking_assert (MAP_START_LOCATION (a->src)
3357 != MAP_START_LOCATION (b->src));
3358 if (MAP_START_LOCATION (a->src) < MAP_START_LOCATION (b->src))
3359 return -1;
3360 else
3361 return +1;
3362 }
3363 };
3364 struct macro_loc_traits
3365 {
3366 typedef macro_loc_info value_type;
3367 typedef const line_map_macro *compare_type;
3368
3369 static const bool empty_zero_p = false;
3370
3371 static hashval_t hash (compare_type p)
3372 {
3373 return pointer_hash<const line_map_macro>::hash (p);
3374 }
3375 static hashval_t hash (const value_type &v)
3376 {
3377 return hash (v.src);
3378 }
3379 static bool equal (const value_type &v, const compare_type p)
3380 {
3381 return v.src == p;
3382 }
3383
3384 static void mark_empty (value_type &v)
3385 {
3386 v.src = nullptr;
3387 }
3388 static bool is_empty (value_type &v)
3389 {
3390 return !v.src;
3391 }
3392
3393 static bool is_deleted (value_type &) { return false; }
3394 static void mark_deleted (value_type &) { gcc_unreachable (); }
3395
3396 static void remove (value_type &) {}
3397 };
3398 /* Table keyed by line_map_macro, used for noting. */
3399 static hash_table<macro_loc_traits> *macro_loc_table;
3400 /* Sorted vector, used for writing. */
3401 static vec<macro_loc_info> *macro_loc_remap;
3402
3403 /* Indirection to allow bsearching imports by ordinary location. */
3404 static vec<module_state *> *ool;
3405
3406 /********************************************************************/
3407 /* Data needed by a module during the process of loading. */
3408 struct GTY(()) slurping {
3409
3410 /* Remap import's module numbering to our numbering. Values are
3411 shifted by 1. Bit0 encodes if the import is direct. */
3412 vec<unsigned, va_heap, vl_embed> *
3413 GTY((skip)) remap; /* Module owner remapping. */
3414
3415 elf_in *GTY((skip)) from; /* The elf loader. */
3416
3417 /* This map is only for header imports themselves -- the global
3418 headers bitmap hold it for the current TU. */
3419 bitmap headers; /* Transitive set of direct imports, including
3420 self. Used for macro visibility and
3421 priority. */
3422
3423 /* These objects point into the mmapped area, unless we're not doing
3424 that, or we got frozen or closed. In those cases they point to
3425 buffers we own. */
3426 bytes_in macro_defs; /* Macro definitions. */
3427 bytes_in macro_tbl; /* Macro table. */
3428
3429 /* Location remapping. first->ordinary, second->macro. */
3430 range_t GTY((skip)) loc_deltas;
3431
3432 unsigned current; /* Section currently being loaded. */
3433 unsigned remaining; /* Number of lazy sections yet to read. */
3434 unsigned lru; /* An LRU counter. */
3435
3436 public:
3437 slurping (elf_in *);
3438 ~slurping ();
3439
3440 public:
3441 /* Close the ELF file, if it's open. */
3442 void close ()
3443 {
3444 if (from)
3445 {
3446 from->end ();
3447 delete from;
3448 from = NULL;
3449 }
3450 }
3451
3452 public:
3453 void release_macros ();
3454
3455 public:
3456 void alloc_remap (unsigned size)
3457 {
3458 gcc_assert (!remap);
3459 vec_safe_reserve (remap, size);
3460 for (unsigned ix = size; ix--;)
3461 remap->quick_push (0);
3462 }
3463 unsigned remap_module (unsigned owner)
3464 {
3465 if (owner < remap->length ())
3466 return (*remap)[owner] >> 1;
3467 return 0;
3468 }
3469
3470 public:
3471 /* GC allocation. But we must explicitly delete it. */
3472 static void *operator new (size_t x)
3473 {
3474 return ggc_alloc_atomic (x);
3475 }
3476 static void operator delete (void *p)
3477 {
3478 ggc_free (p);
3479 }
3480 };
3481
3482 slurping::slurping (elf_in *from)
3483 : remap (NULL), from (from),
3484 headers (BITMAP_GGC_ALLOC ()), macro_defs (), macro_tbl (),
3485 loc_deltas (0, 0),
3486 current (~0u), remaining (0), lru (0)
3487 {
3488 }
3489
3490 slurping::~slurping ()
3491 {
3492 vec_free (remap);
3493 remap = NULL;
3494 release_macros ();
3495 close ();
3496 }
3497
3498 void slurping::release_macros ()
3499 {
3500 if (macro_defs.size)
3501 elf_in::release (from, macro_defs);
3502 if (macro_tbl.size)
3503 elf_in::release (from, macro_tbl);
3504 }
3505
3506 /* Flags for extensions that end up being streamed. */
3507
3508 enum streamed_extensions {
3509 SE_OPENMP = 1 << 0,
3510 SE_BITS = 1
3511 };
3512
3513 /* Counter indices. */
3514 enum module_state_counts
3515 {
3516 MSC_sec_lwm,
3517 MSC_sec_hwm,
3518 MSC_pendings,
3519 MSC_entities,
3520 MSC_namespaces,
3521 MSC_bindings,
3522 MSC_macros,
3523 MSC_inits,
3524 MSC_HWM
3525 };
3526
3527 /********************************************************************/
3528 struct module_state_config;
3529
3530 /* Increasing levels of loadedness. */
3531 enum module_loadedness {
3532 ML_NONE, /* Not loaded. */
3533 ML_CONFIG, /* Config loaed. */
3534 ML_PREPROCESSOR, /* Preprocessor loaded. */
3535 ML_LANGUAGE, /* Language loaded. */
3536 };
3537
3538 /* Increasing levels of directness (toplevel) of import. */
3539 enum module_directness {
3540 MD_NONE, /* Not direct. */
3541 MD_PARTITION_DIRECT, /* Direct import of a partition. */
3542 MD_DIRECT, /* Direct import. */
3543 MD_PURVIEW_DIRECT, /* direct import in purview. */
3544 };
3545
3546 /* State of a particular module. */
3547
3548 class GTY((chain_next ("%h.parent"), for_user)) module_state {
3549 public:
3550 /* We always import & export ourselves. */
3551 bitmap imports; /* Transitive modules we're importing. */
3552 bitmap exports; /* Subset of that, that we're exporting. */
3553
3554 module_state *parent;
3555 tree name; /* Name of the module. */
3556
3557 slurping *slurp; /* Data for loading. */
3558
3559 const char *flatname; /* Flatname of module. */
3560 char *filename; /* CMI Filename */
3561
3562 /* Indices into the entity_ary. */
3563 unsigned entity_lwm;
3564 unsigned entity_num;
3565
3566 /* Location ranges for this module. adhoc-locs are decomposed, so
3567 don't have a range. */
3568 loc_range_t GTY((skip)) ordinary_locs;
3569 loc_range_t GTY((skip)) macro_locs; // [lwm,num)
3570
3571 /* LOC is first set too the importing location. When initially
3572 loaded it refers to a module loc whose parent is the importing
3573 location. */
3574 location_t loc; /* Location referring to module itself. */
3575 unsigned crc; /* CRC we saw reading it in. */
3576
3577 unsigned mod; /* Module owner number. */
3578 unsigned remap; /* Remapping during writing. */
3579
3580 unsigned short subst; /* Mangle subst if !0. */
3581
3582 /* How loaded this module is. */
3583 enum module_loadedness loadedness : 2;
3584
3585 bool module_p : 1; /* /The/ module of this TU. */
3586 bool header_p : 1; /* Is a header unit. */
3587 bool interface_p : 1; /* An interface. */
3588 bool partition_p : 1; /* A partition. */
3589
3590 /* How directly this module is imported. */
3591 enum module_directness directness : 2;
3592
3593 bool exported_p : 1; /* directness != MD_NONE && exported. */
3594 bool cmi_noted_p : 1; /* We've told the user about the CMI, don't
3595 do it again */
3596 bool active_init_p : 1; /* This module's global initializer needs
3597 calling. */
3598 bool inform_cmi_p : 1; /* Inform of a read/write. */
3599 bool visited_p : 1; /* A walk-once flag. */
3600 /* Record extensions emitted or permitted. */
3601 unsigned extensions : SE_BITS;
3602 /* 14 bits used, 2 bits remain */
3603
3604 public:
3605 module_state (tree name, module_state *, bool);
3606 ~module_state ();
3607
3608 public:
3609 void release ()
3610 {
3611 imports = exports = NULL;
3612 slurped ();
3613 }
3614 void slurped ()
3615 {
3616 delete slurp;
3617 slurp = NULL;
3618 }
3619 elf_in *from () const
3620 {
3621 return slurp->from;
3622 }
3623
3624 public:
3625 /* Kind of this module. */
3626 bool is_module () const
3627 {
3628 return module_p;
3629 }
3630 bool is_header () const
3631 {
3632 return header_p;
3633 }
3634 bool is_interface () const
3635 {
3636 return interface_p;
3637 }
3638 bool is_partition () const
3639 {
3640 return partition_p;
3641 }
3642
3643 /* How this module is used in the current TU. */
3644 bool is_exported () const
3645 {
3646 return exported_p;
3647 }
3648 bool is_direct () const
3649 {
3650 return directness >= MD_DIRECT;
3651 }
3652 bool is_purview_direct () const
3653 {
3654 return directness == MD_PURVIEW_DIRECT;
3655 }
3656 bool is_partition_direct () const
3657 {
3658 return directness == MD_PARTITION_DIRECT;
3659 }
3660
3661 public:
3662 /* Is this a real module? */
3663 bool has_location () const
3664 {
3665 return loc != UNKNOWN_LOCATION;
3666 }
3667
3668 public:
3669 bool check_not_purview (location_t loc);
3670
3671 public:
3672 void mangle (bool include_partition);
3673
3674 public:
3675 void set_import (module_state const *, bool is_export);
3676 void announce (const char *) const;
3677
3678 public:
3679 /* Read and write module. */
3680 void write_begin (elf_out *to, cpp_reader *,
3681 module_state_config &, unsigned &crc);
3682 void write_end (elf_out *to, cpp_reader *,
3683 module_state_config &, unsigned &crc);
3684 bool read_initial (cpp_reader *);
3685 bool read_preprocessor (bool);
3686 bool read_language (bool);
3687
3688 public:
3689 /* Read a section. */
3690 bool load_section (unsigned snum, binding_slot *mslot);
3691 /* Lazily read a section. */
3692 bool lazy_load (unsigned index, binding_slot *mslot);
3693
3694 public:
3695 /* Juggle a limited number of file numbers. */
3696 static void freeze_an_elf ();
3697 bool maybe_defrost ();
3698
3699 public:
3700 void maybe_completed_reading ();
3701 bool check_read (bool outermost, bool ok);
3702
3703 private:
3704 /* The README, for human consumption. */
3705 void write_readme (elf_out *to, cpp_reader *, const char *dialect);
3706 void write_env (elf_out *to);
3707
3708 private:
3709 /* Import tables. */
3710 void write_imports (bytes_out &cfg, bool direct);
3711 unsigned read_imports (bytes_in &cfg, cpp_reader *, line_maps *maps);
3712
3713 private:
3714 void write_imports (elf_out *to, unsigned *crc_ptr);
3715 bool read_imports (cpp_reader *, line_maps *);
3716
3717 private:
3718 void write_partitions (elf_out *to, unsigned, unsigned *crc_ptr);
3719 bool read_partitions (unsigned);
3720
3721 private:
3722 void write_config (elf_out *to, struct module_state_config &, unsigned crc);
3723 bool read_config (struct module_state_config &);
3724 static void write_counts (elf_out *to, unsigned [MSC_HWM], unsigned *crc_ptr);
3725 bool read_counts (unsigned *);
3726
3727 public:
3728 void note_cmi_name ();
3729
3730 private:
3731 static unsigned write_bindings (elf_out *to, vec<depset *> depsets,
3732 unsigned *crc_ptr);
3733 bool read_bindings (unsigned count, unsigned lwm, unsigned hwm);
3734
3735 static void write_namespace (bytes_out &sec, depset *ns_dep);
3736 tree read_namespace (bytes_in &sec);
3737
3738 void write_namespaces (elf_out *to, vec<depset *> spaces,
3739 unsigned, unsigned *crc_ptr);
3740 bool read_namespaces (unsigned);
3741
3742 void intercluster_seed (trees_out &sec, unsigned index, depset *dep);
3743 unsigned write_cluster (elf_out *to, depset *depsets[], unsigned size,
3744 depset::hash &, unsigned *counts, unsigned *crc_ptr);
3745 bool read_cluster (unsigned snum);
3746
3747 private:
3748 unsigned write_inits (elf_out *to, depset::hash &, unsigned *crc_ptr);
3749 bool read_inits (unsigned count);
3750
3751 private:
3752 unsigned write_pendings (elf_out *to, vec<depset *> depsets,
3753 depset::hash &, unsigned *crc_ptr);
3754 bool read_pendings (unsigned count);
3755
3756 private:
3757 void write_entities (elf_out *to, vec<depset *> depsets,
3758 unsigned count, unsigned *crc_ptr);
3759 bool read_entities (unsigned count, unsigned lwm, unsigned hwm);
3760
3761 private:
3762 void write_init_maps ();
3763 range_t write_prepare_maps (module_state_config *, bool);
3764 bool read_prepare_maps (const module_state_config *);
3765
3766 void write_ordinary_maps (elf_out *to, range_t &,
3767 bool, unsigned *crc_ptr);
3768 bool read_ordinary_maps (unsigned, unsigned);
3769 void write_macro_maps (elf_out *to, range_t &, unsigned *crc_ptr);
3770 bool read_macro_maps (unsigned);
3771
3772 private:
3773 void write_define (bytes_out &, const cpp_macro *);
3774 cpp_macro *read_define (bytes_in &, cpp_reader *) const;
3775 vec<cpp_hashnode *> *prepare_macros (cpp_reader *);
3776 unsigned write_macros (elf_out *to, vec<cpp_hashnode *> *, unsigned *crc_ptr);
3777 bool read_macros ();
3778 void install_macros ();
3779
3780 public:
3781 void import_macros ();
3782
3783 public:
3784 static void undef_macro (cpp_reader *, location_t, cpp_hashnode *);
3785 static cpp_macro *deferred_macro (cpp_reader *, location_t, cpp_hashnode *);
3786
3787 public:
3788 static bool note_location (location_t);
3789 static void write_location (bytes_out &, location_t);
3790 location_t read_location (bytes_in &) const;
3791
3792 public:
3793 void set_flatname ();
3794 const char *get_flatname () const
3795 {
3796 return flatname;
3797 }
3798 location_t imported_from () const;
3799
3800 public:
3801 void set_filename (const Cody::Packet &);
3802 bool do_import (cpp_reader *, bool outermost);
3803 };
3804
3805 /* Hash module state by name. This cannot be a member of
3806 module_state, because of GTY restrictions. We never delete from
3807 the hash table, but ggc_ptr_hash doesn't support that
3808 simplification. */
3809
3810 struct module_state_hash : ggc_ptr_hash<module_state> {
3811 typedef std::pair<tree,uintptr_t> compare_type; /* {name,parent} */
3812
3813 static inline hashval_t hash (const value_type m);
3814 static inline hashval_t hash (const compare_type &n);
3815 static inline bool equal (const value_type existing,
3816 const compare_type &candidate);
3817 };
3818
3819 module_state::module_state (tree name, module_state *parent, bool partition)
3820 : imports (BITMAP_GGC_ALLOC ()), exports (BITMAP_GGC_ALLOC ()),
3821 parent (parent), name (name), slurp (NULL),
3822 flatname (NULL), filename (NULL),
3823 entity_lwm (~0u >> 1), entity_num (0),
3824 ordinary_locs (0, 0), macro_locs (0, 0),
3825 loc (UNKNOWN_LOCATION),
3826 crc (0), mod (MODULE_UNKNOWN), remap (0), subst (0)
3827 {
3828 loadedness = ML_NONE;
3829
3830 module_p = header_p = interface_p = partition_p = false;
3831
3832 directness = MD_NONE;
3833 exported_p = false;
3834
3835 cmi_noted_p = false;
3836 active_init_p = false;
3837
3838 partition_p = partition;
3839
3840 inform_cmi_p = false;
3841 visited_p = false;
3842
3843 extensions = 0;
3844 if (name && TREE_CODE (name) == STRING_CST)
3845 {
3846 header_p = true;
3847
3848 const char *string = TREE_STRING_POINTER (name);
3849 gcc_checking_assert (string[0] == '.'
3850 ? IS_DIR_SEPARATOR (string[1])
3851 : IS_ABSOLUTE_PATH (string));
3852 }
3853
3854 gcc_checking_assert (!(parent && header_p));
3855 }
3856
3857 module_state::~module_state ()
3858 {
3859 release ();
3860 }
3861
3862 /* Hash module state. */
3863 static hashval_t
3864 module_name_hash (const_tree name)
3865 {
3866 if (TREE_CODE (name) == STRING_CST)
3867 return htab_hash_string (TREE_STRING_POINTER (name));
3868 else
3869 return IDENTIFIER_HASH_VALUE (name);
3870 }
3871
3872 hashval_t
3873 module_state_hash::hash (const value_type m)
3874 {
3875 hashval_t ph = pointer_hash<void>::hash
3876 (reinterpret_cast<void *> (reinterpret_cast<uintptr_t> (m->parent)
3877 | m->is_partition ()));
3878 hashval_t nh = module_name_hash (m->name);
3879 return iterative_hash_hashval_t (ph, nh);
3880 }
3881
3882 /* Hash a name. */
3883 hashval_t
3884 module_state_hash::hash (const compare_type &c)
3885 {
3886 hashval_t ph = pointer_hash<void>::hash (reinterpret_cast<void *> (c.second));
3887 hashval_t nh = module_name_hash (c.first);
3888
3889 return iterative_hash_hashval_t (ph, nh);
3890 }
3891
3892 bool
3893 module_state_hash::equal (const value_type existing,
3894 const compare_type &candidate)
3895 {
3896 uintptr_t ep = (reinterpret_cast<uintptr_t> (existing->parent)
3897 | existing->is_partition ());
3898 if (ep != candidate.second)
3899 return false;
3900
3901 /* Identifier comparison is by pointer. If the string_csts happen
3902 to be the same object, then they're equal too. */
3903 if (existing->name == candidate.first)
3904 return true;
3905
3906 /* If neither are string csts, they can't be equal. */
3907 if (TREE_CODE (candidate.first) != STRING_CST
3908 || TREE_CODE (existing->name) != STRING_CST)
3909 return false;
3910
3911 /* String equality. */
3912 if (TREE_STRING_LENGTH (existing->name)
3913 == TREE_STRING_LENGTH (candidate.first)
3914 && !memcmp (TREE_STRING_POINTER (existing->name),
3915 TREE_STRING_POINTER (candidate.first),
3916 TREE_STRING_LENGTH (existing->name)))
3917 return true;
3918
3919 return false;
3920 }
3921
3922 /********************************************************************/
3923 /* Global state */
3924
3925 /* Mapper name. */
3926 static const char *module_mapper_name;
3927
3928 /* Deferred import queue (FIFO). */
3929 static vec<module_state *, va_heap, vl_embed> *pending_imports;
3930
3931 /* CMI repository path and workspace. */
3932 static char *cmi_repo;
3933 static size_t cmi_repo_length;
3934 static char *cmi_path;
3935 static size_t cmi_path_alloc;
3936
3937 /* Count of available and loaded clusters. */
3938 static unsigned available_clusters;
3939 static unsigned loaded_clusters;
3940
3941 /* What the current TU is. */
3942 unsigned module_kind;
3943
3944 /* Global trees. */
3945 static const std::pair<tree *, unsigned> global_tree_arys[] =
3946 {
3947 std::pair<tree *, unsigned> (sizetype_tab, stk_type_kind_last),
3948 std::pair<tree *, unsigned> (integer_types, itk_none),
3949 std::pair<tree *, unsigned> (global_trees, TI_MODULE_HWM),
3950 std::pair<tree *, unsigned> (c_global_trees, CTI_MODULE_HWM),
3951 std::pair<tree *, unsigned> (cp_global_trees, CPTI_MODULE_HWM),
3952 std::pair<tree *, unsigned> (NULL, 0)
3953 };
3954 static GTY(()) vec<tree, va_gc> *fixed_trees;
3955 static unsigned global_crc;
3956
3957 /* Lazy loading can open many files concurrently, there are
3958 per-process limits on that. We pay attention to the process limit,
3959 and attempt to increase it when we run out. Otherwise we use an
3960 LRU scheme to figure out who to flush. Note that if the import
3961 graph /depth/ exceeds lazy_limit, we'll exceed the limit. */
3962 static unsigned lazy_lru; /* LRU counter. */
3963 static unsigned lazy_open; /* Number of open modules */
3964 static unsigned lazy_limit; /* Current limit of open modules. */
3965 static unsigned lazy_hard_limit; /* Hard limit on open modules. */
3966 /* Account for source, assembler and dump files & directory searches.
3967 We don't keep the source file's open, so we don't have to account
3968 for #include depth. I think dump files are opened and closed per
3969 pass, but ICBW. */
3970 #define LAZY_HEADROOM 15 /* File descriptor headroom. */
3971
3972 /* Vector of module state. Indexed by OWNER. Has at least 2 slots. */
3973 static GTY(()) vec<module_state *, va_gc> *modules;
3974
3975 /* Hash of module state, findable by {name, parent}. */
3976 static GTY(()) hash_table<module_state_hash> *modules_hash;
3977
3978 /* Map of imported entities. We map DECL_UID to index of entity
3979 vector. */
3980 typedef hash_map<unsigned/*UID*/, unsigned/*index*/,
3981 simple_hashmap_traits<int_hash<unsigned,0>, unsigned>
3982 > entity_map_t;
3983 static entity_map_t *entity_map;
3984 /* Doesn't need GTYing, because any tree referenced here is also
3985 findable by, symbol table, specialization table, return type of
3986 reachable function. */
3987 static vec<binding_slot, va_heap, vl_embed> *entity_ary;
3988
3989 /* Members entities of imported classes that are defined in this TU.
3990 These are where the entity's context is not from the current TU.
3991 We need to emit the definition (but not the enclosing class).
3992
3993 We could find these by walking ALL the imported classes that we
3994 could provide a member definition. But that's expensive,
3995 especially when you consider lazy implicit member declarations,
3996 which could be ANY imported class. */
3997 static GTY(()) vec<tree, va_gc> *class_members;
3998
3999 /* The same problem exists for class template partial
4000 specializations. Now that we have constraints, the invariant of
4001 expecting them in the instantiation table no longer holds. One of
4002 the constrained partial specializations will be there, but the
4003 others not so much. It's not even an unconstrained partial
4004 spacialization in the table :( so any partial template declaration
4005 is added to this list too. */
4006 static GTY(()) vec<tree, va_gc> *partial_specializations;
4007
4008 /********************************************************************/
4009
4010 /* Our module mapper (created lazily). */
4011 module_client *mapper;
4012
4013 static module_client *make_mapper (location_t loc, class mkdeps *deps);
4014 inline module_client *get_mapper (location_t loc, class mkdeps *deps)
4015 {
4016 auto *res = mapper;
4017 if (!res)
4018 res = make_mapper (loc, deps);
4019 return res;
4020 }
4021
4022 /********************************************************************/
4023 static tree
4024 get_clone_target (tree decl)
4025 {
4026 tree target;
4027
4028 if (TREE_CODE (decl) == TEMPLATE_DECL)
4029 {
4030 tree res_orig = DECL_CLONED_FUNCTION (DECL_TEMPLATE_RESULT (decl));
4031
4032 target = DECL_TI_TEMPLATE (res_orig);
4033 }
4034 else
4035 target = DECL_CLONED_FUNCTION (decl);
4036
4037 gcc_checking_assert (DECL_MAYBE_IN_CHARGE_CDTOR_P (target));
4038
4039 return target;
4040 }
4041
4042 /* Like FOR_EACH_CLONE, but will walk cloned templates. */
4043 #define FOR_EVERY_CLONE(CLONE, FN) \
4044 if (!DECL_MAYBE_IN_CHARGE_CDTOR_P (FN)); \
4045 else \
4046 for (CLONE = DECL_CHAIN (FN); \
4047 CLONE && DECL_CLONED_FUNCTION_P (CLONE); \
4048 CLONE = DECL_CHAIN (CLONE))
4049
4050 /* It'd be nice if USE_TEMPLATE was a field of template_info
4051 (a) it'd solve the enum case dealt with below,
4052 (b) both class templates and decl templates would store this in the
4053 same place
4054 (c) this function wouldn't need the by-ref arg, which is annoying. */
4055
4056 static tree
4057 node_template_info (tree decl, int &use)
4058 {
4059 tree ti = NULL_TREE;
4060 int use_tpl = -1;
4061 if (DECL_IMPLICIT_TYPEDEF_P (decl))
4062 {
4063 tree type = TREE_TYPE (decl);
4064
4065 ti = TYPE_TEMPLATE_INFO (type);
4066 if (ti)
4067 {
4068 if (TYPE_LANG_SPECIFIC (type))
4069 use_tpl = CLASSTYPE_USE_TEMPLATE (type);
4070 else
4071 {
4072 /* An enum, where we don't explicitly encode use_tpl.
4073 If the containing context (a type or a function), is
4074 an ({im,ex}plicit) instantiation, then this is too.
4075 If it's a partial or explicit specialization, then
4076 this is not!. */
4077 tree ctx = CP_DECL_CONTEXT (decl);
4078 if (TYPE_P (ctx))
4079 ctx = TYPE_NAME (ctx);
4080 node_template_info (ctx, use);
4081 use_tpl = use != 2 ? use : 0;
4082 }
4083 }
4084 }
4085 else if (DECL_LANG_SPECIFIC (decl)
4086 && (VAR_P (decl)
4087 || TREE_CODE (decl) == TYPE_DECL
4088 || TREE_CODE (decl) == FUNCTION_DECL
4089 || TREE_CODE (decl) == FIELD_DECL
4090 || TREE_CODE (decl) == CONCEPT_DECL
4091 || TREE_CODE (decl) == TEMPLATE_DECL))
4092 {
4093 use_tpl = DECL_USE_TEMPLATE (decl);
4094 ti = DECL_TEMPLATE_INFO (decl);
4095 }
4096
4097 use = use_tpl;
4098 return ti;
4099 }
4100
4101 /* Find the index in entity_ary for an imported DECL. It should
4102 always be there, but bugs can cause it to be missing, and that can
4103 crash the crash reporting -- let's not do that! When streaming
4104 out we place entities from this module there too -- with negated
4105 indices. */
4106
4107 static unsigned
4108 import_entity_index (tree decl, bool null_ok = false)
4109 {
4110 if (unsigned *slot = entity_map->get (DECL_UID (decl)))
4111 return *slot;
4112
4113 gcc_checking_assert (null_ok);
4114 return ~(~0u >> 1);
4115 }
4116
4117 /* Find the module for an imported entity at INDEX in the entity ary.
4118 There must be one. */
4119
4120 static module_state *
4121 import_entity_module (unsigned index)
4122 {
4123 if (index > ~(~0u >> 1))
4124 /* This is an index for an exported entity. */
4125 return (*modules)[0];
4126
4127 /* Do not include the current TU (not an off-by-one error). */
4128 unsigned pos = 1;
4129 unsigned len = modules->length () - pos;
4130 while (len)
4131 {
4132 unsigned half = len / 2;
4133 module_state *probe = (*modules)[pos + half];
4134 if (index < probe->entity_lwm)
4135 len = half;
4136 else if (index < probe->entity_lwm + probe->entity_num)
4137 return probe;
4138 else
4139 {
4140 pos += half + 1;
4141 len = len - (half + 1);
4142 }
4143 }
4144 gcc_unreachable ();
4145 }
4146
4147
4148 /********************************************************************/
4149 /* A dumping machinery. */
4150
4151 class dumper {
4152 public:
4153 enum {
4154 LOCATION = TDF_LINENO, /* -lineno:Source location streaming. */
4155 DEPEND = TDF_GRAPH, /* -graph:Dependency graph construction. */
4156 CLUSTER = TDF_BLOCKS, /* -blocks:Clusters. */
4157 TREE = TDF_UID, /* -uid:Tree streaming. */
4158 MERGE = TDF_ALIAS, /* -alias:Mergeable Entities. */
4159 ELF = TDF_ASMNAME, /* -asmname:Elf data. */
4160 MACRO = TDF_VOPS /* -vops:Macros. */
4161 };
4162
4163 private:
4164 struct impl {
4165 typedef vec<module_state *, va_heap, vl_embed> stack_t;
4166
4167 FILE *stream; /* Dump stream. */
4168 unsigned indent; /* Local indentation. */
4169 bool bol; /* Beginning of line. */
4170 stack_t stack; /* Trailing array of module_state. */
4171
4172 bool nested_name (tree); /* Dump a name following DECL_CONTEXT. */
4173 };
4174
4175 public:
4176 /* The dumper. */
4177 impl *dumps;
4178 dump_flags_t flags;
4179
4180 public:
4181 /* Push/pop module state dumping. */
4182 unsigned push (module_state *);
4183 void pop (unsigned);
4184
4185 public:
4186 /* Change local indentation. */
4187 void indent ()
4188 {
4189 if (dumps)
4190 dumps->indent++;
4191 }
4192 void outdent ()
4193 {
4194 if (dumps)
4195 {
4196 gcc_checking_assert (dumps->indent);
4197 dumps->indent--;
4198 }
4199 }
4200
4201 public:
4202 /* Is dump enabled?. */
4203 bool operator () (int mask = 0)
4204 {
4205 if (!dumps || !dumps->stream)
4206 return false;
4207 if (mask && !(mask & flags))
4208 return false;
4209 return true;
4210 }
4211 /* Dump some information. */
4212 bool operator () (const char *, ...);
4213 };
4214
4215 /* The dumper. */
4216 static dumper dump = {0, dump_flags_t (0)};
4217
4218 /* Push to dumping M. Return previous indentation level. */
4219
4220 unsigned
4221 dumper::push (module_state *m)
4222 {
4223 FILE *stream = NULL;
4224 if (!dumps || !dumps->stack.length ())
4225 {
4226 stream = dump_begin (module_dump_id, &flags);
4227 if (!stream)
4228 return 0;
4229 }
4230
4231 if (!dumps || !dumps->stack.space (1))
4232 {
4233 /* Create or extend the dump implementor. */
4234 unsigned current = dumps ? dumps->stack.length () : 0;
4235 unsigned count = current ? current * 2 : EXPERIMENT (1, 20);
4236 size_t alloc = (offsetof (impl, stack)
4237 + impl::stack_t::embedded_size (count));
4238 dumps = XRESIZEVAR (impl, dumps, alloc);
4239 dumps->stack.embedded_init (count, current);
4240 }
4241 if (stream)
4242 dumps->stream = stream;
4243
4244 unsigned n = dumps->indent;
4245 dumps->indent = 0;
4246 dumps->bol = true;
4247 dumps->stack.quick_push (m);
4248 if (m)
4249 {
4250 module_state *from = NULL;
4251
4252 if (dumps->stack.length () > 1)
4253 from = dumps->stack[dumps->stack.length () - 2];
4254 else
4255 dump ("");
4256 dump (from ? "Starting module %M (from %M)"
4257 : "Starting module %M", m, from);
4258 }
4259
4260 return n;
4261 }
4262
4263 /* Pop from dumping. Restore indentation to N. */
4264
4265 void dumper::pop (unsigned n)
4266 {
4267 if (!dumps)
4268 return;
4269
4270 gcc_checking_assert (dump () && !dumps->indent);
4271 if (module_state *m = dumps->stack[dumps->stack.length () - 1])
4272 {
4273 module_state *from = (dumps->stack.length () > 1
4274 ? dumps->stack[dumps->stack.length () - 2] : NULL);
4275 dump (from ? "Finishing module %M (returning to %M)"
4276 : "Finishing module %M", m, from);
4277 }
4278 dumps->stack.pop ();
4279 dumps->indent = n;
4280 if (!dumps->stack.length ())
4281 {
4282 dump_end (module_dump_id, dumps->stream);
4283 dumps->stream = NULL;
4284 }
4285 }
4286
4287 /* Dump a nested name for arbitrary tree T. Sometimes it won't have a
4288 name. */
4289
4290 bool
4291 dumper::impl::nested_name (tree t)
4292 {
4293 tree ti = NULL_TREE;
4294 int origin = -1;
4295 tree name = NULL_TREE;
4296
4297 if (t && TREE_CODE (t) == TREE_BINFO)
4298 t = BINFO_TYPE (t);
4299
4300 if (t && TYPE_P (t))
4301 t = TYPE_NAME (t);
4302
4303 if (t && DECL_P (t))
4304 {
4305 if (t == global_namespace || DECL_TEMPLATE_PARM_P (t))
4306 ;
4307 else if (tree ctx = DECL_CONTEXT (t))
4308 if (TREE_CODE (ctx) == TRANSLATION_UNIT_DECL
4309 || nested_name (ctx))
4310 fputs ("::", stream);
4311
4312 int use_tpl;
4313 ti = node_template_info (t, use_tpl);
4314 if (ti && TREE_CODE (TI_TEMPLATE (ti)) == TEMPLATE_DECL
4315 && (DECL_TEMPLATE_RESULT (TI_TEMPLATE (ti)) == t))
4316 t = TI_TEMPLATE (ti);
4317 tree not_tmpl = t;
4318 if (TREE_CODE (t) == TEMPLATE_DECL)
4319 {
4320 fputs ("template ", stream);
4321 not_tmpl = DECL_TEMPLATE_RESULT (t);
4322 }
4323
4324 if (not_tmpl
4325 && DECL_P (not_tmpl)
4326 && DECL_LANG_SPECIFIC (not_tmpl)
4327 && DECL_MODULE_IMPORT_P (not_tmpl))
4328 {
4329 /* We need to be careful here, so as to not explode on
4330 inconsistent data -- we're probably debugging, because
4331 Something Is Wrong. */
4332 unsigned index = import_entity_index (t, true);
4333 if (!(index & ~(~0u >> 1)))
4334 origin = import_entity_module (index)->mod;
4335 else if (index > ~(~0u >> 1))
4336 /* An imported partition member that we're emitting. */
4337 origin = 0;
4338 else
4339 origin = -2;
4340 }
4341
4342 name = DECL_NAME (t) ? DECL_NAME (t)
4343 : HAS_DECL_ASSEMBLER_NAME_P (t) ? DECL_ASSEMBLER_NAME_RAW (t)
4344 : NULL_TREE;
4345 }
4346 else
4347 name = t;
4348
4349 if (name)
4350 switch (TREE_CODE (name))
4351 {
4352 default:
4353 fputs ("#unnamed#", stream);
4354 break;
4355
4356 case IDENTIFIER_NODE:
4357 fwrite (IDENTIFIER_POINTER (name), 1, IDENTIFIER_LENGTH (name), stream);
4358 break;
4359
4360 case INTEGER_CST:
4361 print_hex (wi::to_wide (name), stream);
4362 break;
4363
4364 case STRING_CST:
4365 /* If TREE_TYPE is NULL, this is a raw string. */
4366 fwrite (TREE_STRING_POINTER (name), 1,
4367 TREE_STRING_LENGTH (name) - (TREE_TYPE (name) != NULL_TREE),
4368 stream);
4369 break;
4370 }
4371 else
4372 fputs ("#null#", stream);
4373
4374 if (origin >= 0)
4375 {
4376 const module_state *module = (*modules)[origin];
4377 fprintf (stream, "@%s:%d", !module ? "" : !module->name ? "(unnamed)"
4378 : module->get_flatname (), origin);
4379 }
4380 else if (origin == -2)
4381 fprintf (stream, "@???");
4382
4383 if (ti)
4384 {
4385 tree args = INNERMOST_TEMPLATE_ARGS (TI_ARGS (ti));
4386 fputs ("<", stream);
4387 if (args)
4388 for (int ix = 0; ix != TREE_VEC_LENGTH (args); ix++)
4389 {
4390 if (ix)
4391 fputs (",", stream);
4392 nested_name (TREE_VEC_ELT (args, ix));
4393 }
4394 fputs (">", stream);
4395 }
4396
4397 return true;
4398 }
4399
4400 /* Formatted dumping. FORMAT begins with '+' do not emit a trailing
4401 new line. (Normally it is appended.)
4402 Escapes:
4403 %C - tree_code
4404 %I - identifier
4405 %M - module_state
4406 %N - name -- DECL_NAME
4407 %P - context:name pair
4408 %R - unsigned:unsigned ratio
4409 %S - symbol -- DECL_ASSEMBLER_NAME
4410 %U - long unsigned
4411 %V - version
4412 --- the following are printf-like, but without its flexibility
4413 %d - decimal int
4414 %p - pointer
4415 %s - string
4416 %u - unsigned int
4417 %x - hex int
4418
4419 We do not implement the printf modifiers. */
4420
4421 bool
4422 dumper::operator () (const char *format, ...)
4423 {
4424 if (!(*this) ())
4425 return false;
4426
4427 bool no_nl = format[0] == '+';
4428 format += no_nl;
4429
4430 if (dumps->bol)
4431 {
4432 /* Module import indent. */
4433 if (unsigned depth = dumps->stack.length () - 1)
4434 {
4435 const char *prefix = ">>>>";
4436 fprintf (dumps->stream, (depth <= strlen (prefix)
4437 ? &prefix[strlen (prefix) - depth]
4438 : ">.%d.>"), depth);
4439 }
4440
4441 /* Local indent. */
4442 if (unsigned indent = dumps->indent)
4443 {
4444 const char *prefix = " ";
4445 fprintf (dumps->stream, (indent <= strlen (prefix)
4446 ? &prefix[strlen (prefix) - indent]
4447 : " .%d. "), indent);
4448 }
4449 dumps->bol = false;
4450 }
4451
4452 va_list args;
4453 va_start (args, format);
4454 while (const char *esc = strchr (format, '%'))
4455 {
4456 fwrite (format, 1, (size_t)(esc - format), dumps->stream);
4457 format = ++esc;
4458 switch (*format++)
4459 {
4460 default:
4461 gcc_unreachable ();
4462
4463 case '%':
4464 fputc ('%', dumps->stream);
4465 break;
4466
4467 case 'C': /* Code */
4468 {
4469 tree_code code = (tree_code)va_arg (args, unsigned);
4470 fputs (get_tree_code_name (code), dumps->stream);
4471 }
4472 break;
4473
4474 case 'I': /* Identifier. */
4475 {
4476 tree t = va_arg (args, tree);
4477 dumps->nested_name (t);
4478 }
4479 break;
4480
4481 case 'M': /* Module. */
4482 {
4483 const char *str = "(none)";
4484 if (module_state *m = va_arg (args, module_state *))
4485 {
4486 if (!m->has_location ())
4487 str = "(detached)";
4488 else
4489 str = m->get_flatname ();
4490 }
4491 fputs (str, dumps->stream);
4492 }
4493 break;
4494
4495 case 'N': /* Name. */
4496 {
4497 tree t = va_arg (args, tree);
4498 while (t && TREE_CODE (t) == OVERLOAD)
4499 t = OVL_FUNCTION (t);
4500 fputc ('\'', dumps->stream);
4501 dumps->nested_name (t);
4502 fputc ('\'', dumps->stream);
4503 }
4504 break;
4505
4506 case 'P': /* Pair. */
4507 {
4508 tree ctx = va_arg (args, tree);
4509 tree name = va_arg (args, tree);
4510 fputc ('\'', dumps->stream);
4511 dumps->nested_name (ctx);
4512 if (ctx && ctx != global_namespace)
4513 fputs ("::", dumps->stream);
4514 dumps->nested_name (name);
4515 fputc ('\'', dumps->stream);
4516 }
4517 break;
4518
4519 case 'R': /* Ratio */
4520 {
4521 unsigned a = va_arg (args, unsigned);
4522 unsigned b = va_arg (args, unsigned);
4523 fprintf (dumps->stream, "%.1f", (float) a / (b + !b));
4524 }
4525 break;
4526
4527 case 'S': /* Symbol name */
4528 {
4529 tree t = va_arg (args, tree);
4530 if (t && TYPE_P (t))
4531 t = TYPE_NAME (t);
4532 if (t && HAS_DECL_ASSEMBLER_NAME_P (t)
4533 && DECL_ASSEMBLER_NAME_SET_P (t))
4534 {
4535 fputc ('(', dumps->stream);
4536 fputs (IDENTIFIER_POINTER (DECL_ASSEMBLER_NAME (t)),
4537 dumps->stream);
4538 fputc (')', dumps->stream);
4539 }
4540 }
4541 break;
4542
4543 case 'U': /* long unsigned. */
4544 {
4545 unsigned long u = va_arg (args, unsigned long);
4546 fprintf (dumps->stream, "%lu", u);
4547 }
4548 break;
4549
4550 case 'V': /* Verson. */
4551 {
4552 unsigned v = va_arg (args, unsigned);
4553 verstr_t string;
4554
4555 version2string (v, string);
4556 fputs (string, dumps->stream);
4557 }
4558 break;
4559
4560 case 'c': /* Character. */
4561 {
4562 int c = va_arg (args, int);
4563 fputc (c, dumps->stream);
4564 }
4565 break;
4566
4567 case 'd': /* Decimal Int. */
4568 {
4569 int d = va_arg (args, int);
4570 fprintf (dumps->stream, "%d", d);
4571 }
4572 break;
4573
4574 case 'p': /* Pointer. */
4575 {
4576 void *p = va_arg (args, void *);
4577 fprintf (dumps->stream, "%p", p);
4578 }
4579 break;
4580
4581 case 's': /* String. */
4582 {
4583 const char *s = va_arg (args, char *);
4584 gcc_checking_assert (s);
4585 fputs (s, dumps->stream);
4586 }
4587 break;
4588
4589 case 'u': /* Unsigned. */
4590 {
4591 unsigned u = va_arg (args, unsigned);
4592 fprintf (dumps->stream, "%u", u);
4593 }
4594 break;
4595
4596 case 'x': /* Hex. */
4597 {
4598 unsigned x = va_arg (args, unsigned);
4599 fprintf (dumps->stream, "%x", x);
4600 }
4601 break;
4602 }
4603 }
4604 fputs (format, dumps->stream);
4605 va_end (args);
4606 if (!no_nl)
4607 {
4608 dumps->bol = true;
4609 fputc ('\n', dumps->stream);
4610 }
4611 return true;
4612 }
4613
4614 struct note_def_cache_hasher : ggc_cache_ptr_hash<tree_node>
4615 {
4616 static int keep_cache_entry (tree t)
4617 {
4618 if (!CHECKING_P)
4619 /* GTY is unfortunately not clever enough to conditionalize
4620 this. */
4621 gcc_unreachable ();
4622
4623 if (ggc_marked_p (t))
4624 return -1;
4625
4626 unsigned n = dump.push (NULL);
4627 /* This might or might not be an error. We should note its
4628 dropping whichever. */
4629 dump () && dump ("Dropping %N from note_defs table", t);
4630 dump.pop (n);
4631
4632 return 0;
4633 }
4634 };
4635
4636 /* We should stream each definition at most once.
4637 This needs to be a cache because there are cases where a definition
4638 ends up being not retained, and we need to drop those so we don't
4639 get confused if memory is reallocated. */
4640 typedef hash_table<note_def_cache_hasher> note_defs_table_t;
4641 static GTY((cache)) note_defs_table_t *note_defs;
4642
4643 void
4644 trees_in::assert_definition (tree decl ATTRIBUTE_UNUSED,
4645 bool installing ATTRIBUTE_UNUSED)
4646 {
4647 #if CHECKING_P
4648 tree *slot = note_defs->find_slot (decl, installing ? INSERT : NO_INSERT);
4649 tree not_tmpl = STRIP_TEMPLATE (decl);
4650 if (installing)
4651 {
4652 /* We must be inserting for the first time. */
4653 gcc_assert (!*slot);
4654 *slot = decl;
4655 }
4656 else
4657 /* If this is not the mergeable entity, it should not be in the
4658 table. If it is a non-global-module mergeable entity, it
4659 should be in the table. Global module entities could have been
4660 defined textually in the current TU and so might or might not
4661 be present. */
4662 gcc_assert (!is_duplicate (decl)
4663 ? !slot
4664 : (slot
4665 || !DECL_LANG_SPECIFIC (not_tmpl)
4666 || !DECL_MODULE_PURVIEW_P (not_tmpl)
4667 || (!DECL_MODULE_IMPORT_P (not_tmpl)
4668 && header_module_p ())));
4669
4670 if (not_tmpl != decl)
4671 gcc_assert (!note_defs->find_slot (not_tmpl, NO_INSERT));
4672 #endif
4673 }
4674
4675 void
4676 trees_out::assert_definition (tree decl ATTRIBUTE_UNUSED)
4677 {
4678 #if CHECKING_P
4679 tree *slot = note_defs->find_slot (decl, INSERT);
4680 gcc_assert (!*slot);
4681 *slot = decl;
4682 if (TREE_CODE (decl) == TEMPLATE_DECL)
4683 gcc_assert (!note_defs->find_slot (DECL_TEMPLATE_RESULT (decl), NO_INSERT));
4684 #endif
4685 }
4686
4687 /********************************************************************/
4688 static bool
4689 noisy_p ()
4690 {
4691 if (quiet_flag)
4692 return false;
4693
4694 pp_needs_newline (global_dc->printer) = true;
4695 diagnostic_set_last_function (global_dc, (diagnostic_info *) NULL);
4696
4697 return true;
4698 }
4699
4700 /* Set the cmi repo. Strip trailing '/', '.' becomes NULL. */
4701
4702 static void
4703 set_cmi_repo (const char *r)
4704 {
4705 XDELETEVEC (cmi_repo);
4706 XDELETEVEC (cmi_path);
4707 cmi_path_alloc = 0;
4708
4709 cmi_repo = NULL;
4710 cmi_repo_length = 0;
4711
4712 if (!r || !r[0])
4713 return;
4714
4715 size_t len = strlen (r);
4716 cmi_repo = XNEWVEC (char, len + 1);
4717 memcpy (cmi_repo, r, len + 1);
4718
4719 if (len > 1 && IS_DIR_SEPARATOR (cmi_repo[len-1]))
4720 len--;
4721 if (len == 1 && cmi_repo[0] == '.')
4722 len--;
4723 cmi_repo[len] = 0;
4724 cmi_repo_length = len;
4725 }
4726
4727 /* TO is a repo-relative name. Provide one that we may use from where
4728 we are. */
4729
4730 static const char *
4731 maybe_add_cmi_prefix (const char *to, size_t *len_p = NULL)
4732 {
4733 size_t len = len_p || cmi_repo_length ? strlen (to) : 0;
4734
4735 if (cmi_repo_length && !IS_ABSOLUTE_PATH (to))
4736 {
4737 if (cmi_path_alloc < cmi_repo_length + len + 2)
4738 {
4739 XDELETEVEC (cmi_path);
4740 cmi_path_alloc = cmi_repo_length + len * 2 + 2;
4741 cmi_path = XNEWVEC (char, cmi_path_alloc);
4742
4743 memcpy (cmi_path, cmi_repo, cmi_repo_length);
4744 cmi_path[cmi_repo_length] = DIR_SEPARATOR;
4745 }
4746
4747 memcpy (&cmi_path[cmi_repo_length + 1], to, len + 1);
4748 len += cmi_repo_length + 1;
4749 to = cmi_path;
4750 }
4751
4752 if (len_p)
4753 *len_p = len;
4754
4755 return to;
4756 }
4757
4758 /* Try and create the directories of PATH. */
4759
4760 static void
4761 create_dirs (char *path)
4762 {
4763 /* Try and create the missing directories. */
4764 for (char *base = path; *base; base++)
4765 if (IS_DIR_SEPARATOR (*base))
4766 {
4767 char sep = *base;
4768 *base = 0;
4769 int failed = mkdir (path, S_IRWXU | S_IRWXG | S_IRWXO);
4770 dump () && dump ("Mkdir ('%s') errno:=%u", path, failed ? errno : 0);
4771 *base = sep;
4772 if (failed
4773 /* Maybe racing with another creator (of a *different*
4774 module). */
4775 && errno != EEXIST)
4776 break;
4777 }
4778 }
4779
4780 /* Given a CLASSTYPE_DECL_LIST VALUE get the template friend decl,
4781 if that's what this is. */
4782
4783 static tree
4784 friend_from_decl_list (tree frnd)
4785 {
4786 tree res = frnd;
4787
4788 if (TREE_CODE (frnd) != TEMPLATE_DECL)
4789 {
4790 tree tmpl = NULL_TREE;
4791 if (TYPE_P (frnd))
4792 {
4793 res = TYPE_NAME (frnd);
4794 if (CLASS_TYPE_P (frnd)
4795 && CLASSTYPE_TEMPLATE_INFO (frnd))
4796 tmpl = CLASSTYPE_TI_TEMPLATE (frnd);
4797 }
4798 else if (DECL_TEMPLATE_INFO (frnd))
4799 {
4800 tmpl = DECL_TI_TEMPLATE (frnd);
4801 if (TREE_CODE (tmpl) != TEMPLATE_DECL)
4802 tmpl = NULL_TREE;
4803 }
4804
4805 if (tmpl && DECL_TEMPLATE_RESULT (tmpl) == res)
4806 res = tmpl;
4807 }
4808
4809 return res;
4810 }
4811
4812 static tree
4813 find_enum_member (tree ctx, tree name)
4814 {
4815 for (tree values = TYPE_VALUES (ctx);
4816 values; values = TREE_CHAIN (values))
4817 if (DECL_NAME (TREE_VALUE (values)) == name)
4818 return TREE_VALUE (values);
4819
4820 return NULL_TREE;
4821 }
4822
4823 /********************************************************************/
4824 /* Instrumentation gathered writing bytes. */
4825
4826 void
4827 bytes_out::instrument ()
4828 {
4829 dump ("Wrote %u bytes in %u blocks", lengths[3], spans[3]);
4830 dump ("Wrote %u bits in %u bytes", lengths[0] + lengths[1], lengths[2]);
4831 for (unsigned ix = 0; ix < 2; ix++)
4832 dump (" %u %s spans of %R bits", spans[ix],
4833 ix ? "one" : "zero", lengths[ix], spans[ix]);
4834 dump (" %u blocks with %R bits padding", spans[2],
4835 lengths[2] * 8 - (lengths[0] + lengths[1]), spans[2]);
4836 }
4837
4838 /* Instrumentation gathered writing trees. */
4839 void
4840 trees_out::instrument ()
4841 {
4842 if (dump (""))
4843 {
4844 bytes_out::instrument ();
4845 dump ("Wrote:");
4846 dump (" %u decl trees", decl_val_count);
4847 dump (" %u other trees", tree_val_count);
4848 dump (" %u back references", back_ref_count);
4849 dump (" %u null trees", null_count);
4850 }
4851 }
4852
4853 /* Setup and teardown for a tree walk. */
4854
4855 void
4856 trees_out::begin ()
4857 {
4858 gcc_assert (!streaming_p () || !tree_map.elements ());
4859
4860 mark_trees ();
4861 if (streaming_p ())
4862 parent::begin ();
4863 }
4864
4865 unsigned
4866 trees_out::end (elf_out *sink, unsigned name, unsigned *crc_ptr)
4867 {
4868 gcc_checking_assert (streaming_p ());
4869
4870 unmark_trees ();
4871 return parent::end (sink, name, crc_ptr);
4872 }
4873
4874 void
4875 trees_out::end ()
4876 {
4877 gcc_assert (!streaming_p ());
4878
4879 unmark_trees ();
4880 /* Do not parent::end -- we weren't streaming. */
4881 }
4882
4883 void
4884 trees_out::mark_trees ()
4885 {
4886 if (size_t size = tree_map.elements ())
4887 {
4888 /* This isn't our first rodeo, destroy and recreate the
4889 tree_map. I'm a bad bad man. Use the previous size as a
4890 guess for the next one (so not all bad). */
4891 tree_map.~ptr_int_hash_map ();
4892 new (&tree_map) ptr_int_hash_map (size);
4893 }
4894
4895 /* Install the fixed trees, with +ve references. */
4896 unsigned limit = fixed_trees->length ();
4897 for (unsigned ix = 0; ix != limit; ix++)
4898 {
4899 tree val = (*fixed_trees)[ix];
4900 bool existed = tree_map.put (val, ix + tag_fixed);
4901 gcc_checking_assert (!TREE_VISITED (val) && !existed);
4902 TREE_VISITED (val) = true;
4903 }
4904
4905 ref_num = 0;
4906 }
4907
4908 /* Unmark the trees we encountered */
4909
4910 void
4911 trees_out::unmark_trees ()
4912 {
4913 ptr_int_hash_map::iterator end (tree_map.end ());
4914 for (ptr_int_hash_map::iterator iter (tree_map.begin ()); iter != end; ++iter)
4915 {
4916 tree node = reinterpret_cast<tree> ((*iter).first);
4917 int ref = (*iter).second;
4918 /* We should have visited the node, and converted its mergeable
4919 reference to a regular reference. */
4920 gcc_checking_assert (TREE_VISITED (node)
4921 && (ref <= tag_backref || ref >= tag_fixed));
4922 TREE_VISITED (node) = false;
4923 }
4924 }
4925
4926 /* Mark DECL for by-value walking. We do this by inserting it into
4927 the tree map with a reference of zero. May be called multiple
4928 times on the same node. */
4929
4930 void
4931 trees_out::mark_by_value (tree decl)
4932 {
4933 gcc_checking_assert (DECL_P (decl)
4934 /* Enum consts are INTEGER_CSTS. */
4935 || TREE_CODE (decl) == INTEGER_CST
4936 || TREE_CODE (decl) == TREE_BINFO);
4937
4938 if (TREE_VISITED (decl))
4939 /* Must already be forced or fixed. */
4940 gcc_checking_assert (*tree_map.get (decl) >= tag_value);
4941 else
4942 {
4943 bool existed = tree_map.put (decl, tag_value);
4944 gcc_checking_assert (!existed);
4945 TREE_VISITED (decl) = true;
4946 }
4947 }
4948
4949 int
4950 trees_out::get_tag (tree t)
4951 {
4952 gcc_checking_assert (TREE_VISITED (t));
4953 return *tree_map.get (t);
4954 }
4955
4956 /* Insert T into the map, return its tag number. */
4957
4958 int
4959 trees_out::insert (tree t, walk_kind walk)
4960 {
4961 gcc_checking_assert (walk != WK_normal || !TREE_VISITED (t));
4962 int tag = --ref_num;
4963 bool existed;
4964 int &slot = tree_map.get_or_insert (t, &existed);
4965 gcc_checking_assert (TREE_VISITED (t) == existed
4966 && (!existed
4967 || (walk == WK_value && slot == tag_value)));
4968 TREE_VISITED (t) = true;
4969 slot = tag;
4970
4971 return tag;
4972 }
4973
4974 /* Insert T into the backreference array. Return its back reference
4975 number. */
4976
4977 int
4978 trees_in::insert (tree t)
4979 {
4980 gcc_checking_assert (t || get_overrun ());
4981 back_refs.safe_push (t);
4982 return -(int)back_refs.length ();
4983 }
4984
4985 /* A chained set of decls. */
4986
4987 void
4988 trees_out::chained_decls (tree decls)
4989 {
4990 for (; decls; decls = DECL_CHAIN (decls))
4991 tree_node (decls);
4992 tree_node (NULL_TREE);
4993 }
4994
4995 tree
4996 trees_in::chained_decls ()
4997 {
4998 tree decls = NULL_TREE;
4999 for (tree *chain = &decls;;)
5000 if (tree decl = tree_node ())
5001 {
5002 if (!DECL_P (decl) || DECL_CHAIN (decl))
5003 {
5004 set_overrun ();
5005 break;
5006 }
5007 *chain = decl;
5008 chain = &DECL_CHAIN (decl);
5009 }
5010 else
5011 break;
5012
5013 return decls;
5014 }
5015
5016 /* A vector of decls following DECL_CHAIN. */
5017
5018 void
5019 trees_out::vec_chained_decls (tree decls)
5020 {
5021 if (streaming_p ())
5022 {
5023 unsigned len = 0;
5024
5025 for (tree decl = decls; decl; decl = DECL_CHAIN (decl))
5026 len++;
5027 u (len);
5028 }
5029
5030 for (tree decl = decls; decl; decl = DECL_CHAIN (decl))
5031 {
5032 if (DECL_IMPLICIT_TYPEDEF_P (decl)
5033 && TYPE_NAME (TREE_TYPE (decl)) != decl)
5034 /* An anonynmous struct with a typedef name. An odd thing to
5035 write. */
5036 tree_node (NULL_TREE);
5037 else
5038 tree_node (decl);
5039 }
5040 }
5041
5042 vec<tree, va_heap> *
5043 trees_in::vec_chained_decls ()
5044 {
5045 vec<tree, va_heap> *v = NULL;
5046
5047 if (unsigned len = u ())
5048 {
5049 vec_alloc (v, len);
5050
5051 for (unsigned ix = 0; ix < len; ix++)
5052 {
5053 tree decl = tree_node ();
5054 if (decl && !DECL_P (decl))
5055 {
5056 set_overrun ();
5057 break;
5058 }
5059 v->quick_push (decl);
5060 }
5061
5062 if (get_overrun ())
5063 {
5064 vec_free (v);
5065 v = NULL;
5066 }
5067 }
5068
5069 return v;
5070 }
5071
5072 /* A vector of trees. */
5073
5074 void
5075 trees_out::tree_vec (vec<tree, va_gc> *v)
5076 {
5077 unsigned len = vec_safe_length (v);
5078 if (streaming_p ())
5079 u (len);
5080 for (unsigned ix = 0; ix != len; ix++)
5081 tree_node ((*v)[ix]);
5082 }
5083
5084 vec<tree, va_gc> *
5085 trees_in::tree_vec ()
5086 {
5087 vec<tree, va_gc> *v = NULL;
5088 if (unsigned len = u ())
5089 {
5090 vec_alloc (v, len);
5091 for (unsigned ix = 0; ix != len; ix++)
5092 v->quick_push (tree_node ());
5093 }
5094 return v;
5095 }
5096
5097 /* A vector of tree pairs. */
5098
5099 void
5100 trees_out::tree_pair_vec (vec<tree_pair_s, va_gc> *v)
5101 {
5102 unsigned len = vec_safe_length (v);
5103 if (streaming_p ())
5104 u (len);
5105 if (len)
5106 for (unsigned ix = 0; ix != len; ix++)
5107 {
5108 tree_pair_s const &s = (*v)[ix];
5109 tree_node (s.purpose);
5110 tree_node (s.value);
5111 }
5112 }
5113
5114 vec<tree_pair_s, va_gc> *
5115 trees_in::tree_pair_vec ()
5116 {
5117 vec<tree_pair_s, va_gc> *v = NULL;
5118 if (unsigned len = u ())
5119 {
5120 vec_alloc (v, len);
5121 for (unsigned ix = 0; ix != len; ix++)
5122 {
5123 tree_pair_s s;
5124 s.purpose = tree_node ();
5125 s.value = tree_node ();
5126 v->quick_push (s);
5127 }
5128 }
5129 return v;
5130 }
5131
5132 void
5133 trees_out::tree_list (tree list, bool has_purpose)
5134 {
5135 for (; list; list = TREE_CHAIN (list))
5136 {
5137 gcc_checking_assert (TREE_VALUE (list));
5138 tree_node (TREE_VALUE (list));
5139 if (has_purpose)
5140 tree_node (TREE_PURPOSE (list));
5141 }
5142 tree_node (NULL_TREE);
5143 }
5144
5145 tree
5146 trees_in::tree_list (bool has_purpose)
5147 {
5148 tree res = NULL_TREE;
5149
5150 for (tree *chain = &res; tree value = tree_node ();
5151 chain = &TREE_CHAIN (*chain))
5152 {
5153 tree purpose = has_purpose ? tree_node () : NULL_TREE;
5154 *chain = build_tree_list (purpose, value);
5155 }
5156
5157 return res;
5158 }
5159 /* Start tree write. Write information to allocate the receiving
5160 node. */
5161
5162 void
5163 trees_out::start (tree t, bool code_streamed)
5164 {
5165 if (TYPE_P (t))
5166 {
5167 enum tree_code code = TREE_CODE (t);
5168 gcc_checking_assert (TYPE_MAIN_VARIANT (t) == t);
5169 /* All these types are TYPE_NON_COMMON. */
5170 gcc_checking_assert (code == RECORD_TYPE
5171 || code == UNION_TYPE
5172 || code == ENUMERAL_TYPE
5173 || code == TEMPLATE_TYPE_PARM
5174 || code == TEMPLATE_TEMPLATE_PARM
5175 || code == BOUND_TEMPLATE_TEMPLATE_PARM);
5176 }
5177
5178 if (!code_streamed)
5179 u (TREE_CODE (t));
5180
5181 switch (TREE_CODE (t))
5182 {
5183 default:
5184 if (VL_EXP_CLASS_P (t))
5185 u (VL_EXP_OPERAND_LENGTH (t));
5186 break;
5187
5188 case INTEGER_CST:
5189 u (TREE_INT_CST_NUNITS (t));
5190 u (TREE_INT_CST_EXT_NUNITS (t));
5191 break;
5192
5193 case OMP_CLAUSE:
5194 state->extensions |= SE_OPENMP;
5195 u (OMP_CLAUSE_CODE (t));
5196 break;
5197
5198 case STRING_CST:
5199 str (TREE_STRING_POINTER (t), TREE_STRING_LENGTH (t));
5200 break;
5201
5202 case VECTOR_CST:
5203 u (VECTOR_CST_LOG2_NPATTERNS (t));
5204 u (VECTOR_CST_NELTS_PER_PATTERN (t));
5205 break;
5206
5207 case TREE_BINFO:
5208 u (BINFO_N_BASE_BINFOS (t));
5209 break;
5210
5211 case TREE_VEC:
5212 u (TREE_VEC_LENGTH (t));
5213 break;
5214
5215 case FIXED_CST:
5216 gcc_unreachable (); /* Not supported in C++. */
5217 break;
5218
5219 case IDENTIFIER_NODE:
5220 case SSA_NAME:
5221 case TARGET_MEM_REF:
5222 case TRANSLATION_UNIT_DECL:
5223 /* We shouldn't meet these. */
5224 gcc_unreachable ();
5225 break;
5226 }
5227 }
5228
5229 /* Start tree read. Allocate the receiving node. */
5230
5231 tree
5232 trees_in::start (unsigned code)
5233 {
5234 tree t = NULL_TREE;
5235
5236 if (!code)
5237 code = u ();
5238
5239 switch (code)
5240 {
5241 default:
5242 if (code >= MAX_TREE_CODES)
5243 {
5244 fail:
5245 set_overrun ();
5246 return NULL_TREE;
5247 }
5248 else if (TREE_CODE_CLASS (code) == tcc_vl_exp)
5249 {
5250 unsigned ops = u ();
5251 t = build_vl_exp (tree_code (code), ops);
5252 }
5253 else
5254 t = make_node (tree_code (code));
5255 break;
5256
5257 case INTEGER_CST:
5258 {
5259 unsigned n = u ();
5260 unsigned e = u ();
5261 t = make_int_cst (n, e);
5262 }
5263 break;
5264
5265 case OMP_CLAUSE:
5266 {
5267 if (!(state->extensions & SE_OPENMP))
5268 goto fail;
5269
5270 unsigned omp_code = u ();
5271 t = build_omp_clause (UNKNOWN_LOCATION, omp_clause_code (omp_code));
5272 }
5273 break;
5274
5275 case STRING_CST:
5276 {
5277 size_t l;
5278 const char *chars = str (&l);
5279 t = build_string (l, chars);
5280 }
5281 break;
5282
5283 case VECTOR_CST:
5284 {
5285 unsigned log2_npats = u ();
5286 unsigned elts_per = u ();
5287 t = make_vector (log2_npats, elts_per);
5288 }
5289 break;
5290
5291 case TREE_BINFO:
5292 t = make_tree_binfo (u ());
5293 break;
5294
5295 case TREE_VEC:
5296 t = make_tree_vec (u ());
5297 break;
5298
5299 case FIXED_CST:
5300 case IDENTIFIER_NODE:
5301 case SSA_NAME:
5302 case TARGET_MEM_REF:
5303 case TRANSLATION_UNIT_DECL:
5304 goto fail;
5305 }
5306
5307 return t;
5308 }
5309
5310 /* The structure streamers access the raw fields, because the
5311 alternative, of using the accessor macros can require using
5312 different accessors for the same underlying field, depending on the
5313 tree code. That's both confusing and annoying. */
5314
5315 /* Read & write the core boolean flags. */
5316
5317 void
5318 trees_out::core_bools (tree t, bits_out& bits)
5319 {
5320 #define WB(X) (bits.b (X))
5321 /* Stream X if COND holds, and if !COND stream a dummy value so that the
5322 overall number of bits streamed is independent of the runtime value
5323 of COND, which allows the compiler to better optimize this function. */
5324 #define WB_IF(COND, X) WB ((COND) ? (X) : false)
5325 tree_code code = TREE_CODE (t);
5326
5327 WB (t->base.side_effects_flag);
5328 WB (t->base.constant_flag);
5329 WB (t->base.addressable_flag);
5330 WB (t->base.volatile_flag);
5331 WB (t->base.readonly_flag);
5332 /* base.asm_written_flag is a property of the current TU's use of
5333 this decl. */
5334 WB (t->base.nowarning_flag);
5335 /* base.visited read as zero (it's set for writer, because that's
5336 how we mark nodes). */
5337 /* base.used_flag is not streamed. Readers may set TREE_USED of
5338 decls they use. */
5339 WB (t->base.nothrow_flag);
5340 WB (t->base.static_flag);
5341 /* This is TYPE_CACHED_VALUES_P for types. */
5342 WB_IF (TREE_CODE_CLASS (code) != tcc_type, t->base.public_flag);
5343 WB (t->base.private_flag);
5344 WB (t->base.protected_flag);
5345 WB (t->base.deprecated_flag);
5346 WB (t->base.default_def_flag);
5347
5348 switch (code)
5349 {
5350 case CALL_EXPR:
5351 case INTEGER_CST:
5352 case SSA_NAME:
5353 case TARGET_MEM_REF:
5354 case TREE_VEC:
5355 /* These use different base.u fields. */
5356 return;
5357
5358 default:
5359 WB (t->base.u.bits.lang_flag_0);
5360 bool flag_1 = t->base.u.bits.lang_flag_1;
5361 if (!flag_1)
5362 ;
5363 else if (code == TEMPLATE_INFO)
5364 /* This is TI_PENDING_TEMPLATE_FLAG, not relevant to reader. */
5365 flag_1 = false;
5366 else if (code == VAR_DECL)
5367 {
5368 /* This is DECL_INITIALIZED_P. */
5369 if (TREE_CODE (DECL_CONTEXT (t)) != FUNCTION_DECL)
5370 /* We'll set this when reading the definition. */
5371 flag_1 = false;
5372 }
5373 WB (flag_1);
5374 WB (t->base.u.bits.lang_flag_2);
5375 WB (t->base.u.bits.lang_flag_3);
5376 WB (t->base.u.bits.lang_flag_4);
5377 WB (t->base.u.bits.lang_flag_5);
5378 WB (t->base.u.bits.lang_flag_6);
5379 WB (t->base.u.bits.saturating_flag);
5380 WB (t->base.u.bits.unsigned_flag);
5381 WB (t->base.u.bits.packed_flag);
5382 WB (t->base.u.bits.user_align);
5383 WB (t->base.u.bits.nameless_flag);
5384 WB (t->base.u.bits.atomic_flag);
5385 WB (t->base.u.bits.unavailable_flag);
5386 break;
5387 }
5388
5389 if (TREE_CODE_CLASS (code) == tcc_type)
5390 {
5391 WB (t->type_common.no_force_blk_flag);
5392 WB (t->type_common.needs_constructing_flag);
5393 WB (t->type_common.transparent_aggr_flag);
5394 WB (t->type_common.restrict_flag);
5395 WB (t->type_common.string_flag);
5396 WB (t->type_common.lang_flag_0);
5397 WB (t->type_common.lang_flag_1);
5398 WB (t->type_common.lang_flag_2);
5399 WB (t->type_common.lang_flag_3);
5400 WB (t->type_common.lang_flag_4);
5401 WB (t->type_common.lang_flag_5);
5402 WB (t->type_common.lang_flag_6);
5403 WB (t->type_common.typeless_storage);
5404 }
5405
5406 if (TREE_CODE_CLASS (code) != tcc_declaration)
5407 return;
5408
5409 if (CODE_CONTAINS_STRUCT (code, TS_DECL_COMMON))
5410 {
5411 WB (t->decl_common.nonlocal_flag);
5412 WB (t->decl_common.virtual_flag);
5413 WB (t->decl_common.ignored_flag);
5414 WB (t->decl_common.abstract_flag);
5415 WB (t->decl_common.artificial_flag);
5416 WB (t->decl_common.preserve_flag);
5417 WB (t->decl_common.debug_expr_is_from);
5418 WB (t->decl_common.lang_flag_0);
5419 WB (t->decl_common.lang_flag_1);
5420 WB (t->decl_common.lang_flag_2);
5421 WB (t->decl_common.lang_flag_3);
5422 WB (t->decl_common.lang_flag_4);
5423
5424 {
5425 /* This is DECL_INTERFACE_KNOWN: We should redetermine whether
5426 we need to import or export any vtables or typeinfo objects
5427 on stream-in. */
5428 bool interface_known = t->decl_common.lang_flag_5;
5429 if (VAR_P (t) && (DECL_VTABLE_OR_VTT_P (t) || DECL_TINFO_P (t)))
5430 interface_known = false;
5431 WB (interface_known);
5432 }
5433
5434 WB (t->decl_common.lang_flag_6);
5435 WB (t->decl_common.lang_flag_7);
5436 WB (t->decl_common.lang_flag_8);
5437 WB (t->decl_common.decl_flag_0);
5438
5439 {
5440 /* DECL_EXTERNAL -> decl_flag_1
5441 == it is defined elsewhere
5442 DECL_NOT_REALLY_EXTERN -> base.not_really_extern
5443 == that was a lie, it is here */
5444
5445 bool is_external = t->decl_common.decl_flag_1;
5446 if (!is_external)
5447 /* decl_flag_1 is DECL_EXTERNAL. Things we emit here, might
5448 well be external from the POV of an importer. */
5449 // FIXME: Do we need to know if this is a TEMPLATE_RESULT --
5450 // a flag from the caller?
5451 switch (code)
5452 {
5453 default:
5454 break;
5455
5456 case VAR_DECL:
5457 if (TREE_PUBLIC (t)
5458 && !(TREE_STATIC (t)
5459 && DECL_FUNCTION_SCOPE_P (t)
5460 && DECL_DECLARED_INLINE_P (DECL_CONTEXT (t)))
5461 && !DECL_VAR_DECLARED_INLINE_P (t))
5462 is_external = true;
5463 break;
5464
5465 case FUNCTION_DECL:
5466 if (TREE_PUBLIC (t)
5467 && !DECL_DECLARED_INLINE_P (t))
5468 is_external = true;
5469 break;
5470 }
5471 WB (is_external);
5472 }
5473
5474 WB (t->decl_common.decl_flag_2);
5475 WB (t->decl_common.decl_flag_3);
5476 WB (t->decl_common.not_gimple_reg_flag);
5477 WB (t->decl_common.decl_by_reference_flag);
5478 WB (t->decl_common.decl_read_flag);
5479 WB (t->decl_common.decl_nonshareable_flag);
5480 WB (t->decl_common.decl_not_flexarray);
5481 }
5482 else
5483 return;
5484
5485 if (CODE_CONTAINS_STRUCT (code, TS_DECL_WITH_VIS))
5486 {
5487 WB (t->decl_with_vis.defer_output);
5488 WB (t->decl_with_vis.hard_register);
5489 WB (t->decl_with_vis.common_flag);
5490 WB (t->decl_with_vis.in_text_section);
5491 WB (t->decl_with_vis.in_constant_pool);
5492 WB (t->decl_with_vis.dllimport_flag);
5493 WB (t->decl_with_vis.weak_flag);
5494 WB (t->decl_with_vis.seen_in_bind_expr);
5495 WB (t->decl_with_vis.comdat_flag);
5496 WB (t->decl_with_vis.visibility_specified);
5497 WB (t->decl_with_vis.init_priority_p);
5498 WB (t->decl_with_vis.shadowed_for_var_p);
5499 WB (t->decl_with_vis.cxx_constructor);
5500 WB (t->decl_with_vis.cxx_destructor);
5501 WB (t->decl_with_vis.final);
5502 WB (t->decl_with_vis.regdecl_flag);
5503 }
5504 else
5505 return;
5506
5507 if (CODE_CONTAINS_STRUCT (code, TS_FUNCTION_DECL))
5508 {
5509 WB (t->function_decl.static_ctor_flag);
5510 WB (t->function_decl.static_dtor_flag);
5511 WB (t->function_decl.uninlinable);
5512 WB (t->function_decl.possibly_inlined);
5513 WB (t->function_decl.novops_flag);
5514 WB (t->function_decl.returns_twice_flag);
5515 WB (t->function_decl.malloc_flag);
5516 WB (t->function_decl.declared_inline_flag);
5517 WB (t->function_decl.no_inline_warning_flag);
5518 WB (t->function_decl.no_instrument_function_entry_exit);
5519 WB (t->function_decl.no_limit_stack);
5520 WB (t->function_decl.disregard_inline_limits);
5521 WB (t->function_decl.pure_flag);
5522 WB (t->function_decl.looping_const_or_pure_flag);
5523
5524 WB (t->function_decl.has_debug_args_flag);
5525 WB (t->function_decl.versioned_function);
5526
5527 /* decl_type is a (misnamed) 2 bit discriminator. */
5528 unsigned kind = t->function_decl.decl_type;
5529 WB ((kind >> 0) & 1);
5530 WB ((kind >> 1) & 1);
5531 }
5532 #undef WB_IF
5533 #undef WB
5534 }
5535
5536 bool
5537 trees_in::core_bools (tree t, bits_in& bits)
5538 {
5539 #define RB(X) ((X) = bits.b ())
5540 /* See the comment for WB_IF in trees_out::core_bools. */
5541 #define RB_IF(COND, X) ((COND) ? RB (X) : bits.b ())
5542
5543 tree_code code = TREE_CODE (t);
5544
5545 RB (t->base.side_effects_flag);
5546 RB (t->base.constant_flag);
5547 RB (t->base.addressable_flag);
5548 RB (t->base.volatile_flag);
5549 RB (t->base.readonly_flag);
5550 /* base.asm_written_flag is not streamed. */
5551 RB (t->base.nowarning_flag);
5552 /* base.visited is not streamed. */
5553 /* base.used_flag is not streamed. */
5554 RB (t->base.nothrow_flag);
5555 RB (t->base.static_flag);
5556 RB_IF (TREE_CODE_CLASS (code) != tcc_type, t->base.public_flag);
5557 RB (t->base.private_flag);
5558 RB (t->base.protected_flag);
5559 RB (t->base.deprecated_flag);
5560 RB (t->base.default_def_flag);
5561
5562 switch (code)
5563 {
5564 case CALL_EXPR:
5565 case INTEGER_CST:
5566 case SSA_NAME:
5567 case TARGET_MEM_REF:
5568 case TREE_VEC:
5569 /* These use different base.u fields. */
5570 goto done;
5571
5572 default:
5573 RB (t->base.u.bits.lang_flag_0);
5574 RB (t->base.u.bits.lang_flag_1);
5575 RB (t->base.u.bits.lang_flag_2);
5576 RB (t->base.u.bits.lang_flag_3);
5577 RB (t->base.u.bits.lang_flag_4);
5578 RB (t->base.u.bits.lang_flag_5);
5579 RB (t->base.u.bits.lang_flag_6);
5580 RB (t->base.u.bits.saturating_flag);
5581 RB (t->base.u.bits.unsigned_flag);
5582 RB (t->base.u.bits.packed_flag);
5583 RB (t->base.u.bits.user_align);
5584 RB (t->base.u.bits.nameless_flag);
5585 RB (t->base.u.bits.atomic_flag);
5586 RB (t->base.u.bits.unavailable_flag);
5587 break;
5588 }
5589
5590 if (TREE_CODE_CLASS (code) == tcc_type)
5591 {
5592 RB (t->type_common.no_force_blk_flag);
5593 RB (t->type_common.needs_constructing_flag);
5594 RB (t->type_common.transparent_aggr_flag);
5595 RB (t->type_common.restrict_flag);
5596 RB (t->type_common.string_flag);
5597 RB (t->type_common.lang_flag_0);
5598 RB (t->type_common.lang_flag_1);
5599 RB (t->type_common.lang_flag_2);
5600 RB (t->type_common.lang_flag_3);
5601 RB (t->type_common.lang_flag_4);
5602 RB (t->type_common.lang_flag_5);
5603 RB (t->type_common.lang_flag_6);
5604 RB (t->type_common.typeless_storage);
5605 }
5606
5607 if (TREE_CODE_CLASS (code) != tcc_declaration)
5608 goto done;
5609
5610 if (CODE_CONTAINS_STRUCT (code, TS_DECL_COMMON))
5611 {
5612 RB (t->decl_common.nonlocal_flag);
5613 RB (t->decl_common.virtual_flag);
5614 RB (t->decl_common.ignored_flag);
5615 RB (t->decl_common.abstract_flag);
5616 RB (t->decl_common.artificial_flag);
5617 RB (t->decl_common.preserve_flag);
5618 RB (t->decl_common.debug_expr_is_from);
5619 RB (t->decl_common.lang_flag_0);
5620 RB (t->decl_common.lang_flag_1);
5621 RB (t->decl_common.lang_flag_2);
5622 RB (t->decl_common.lang_flag_3);
5623 RB (t->decl_common.lang_flag_4);
5624 RB (t->decl_common.lang_flag_5);
5625 RB (t->decl_common.lang_flag_6);
5626 RB (t->decl_common.lang_flag_7);
5627 RB (t->decl_common.lang_flag_8);
5628 RB (t->decl_common.decl_flag_0);
5629 RB (t->decl_common.decl_flag_1);
5630 RB (t->decl_common.decl_flag_2);
5631 RB (t->decl_common.decl_flag_3);
5632 RB (t->decl_common.not_gimple_reg_flag);
5633 RB (t->decl_common.decl_by_reference_flag);
5634 RB (t->decl_common.decl_read_flag);
5635 RB (t->decl_common.decl_nonshareable_flag);
5636 RB (t->decl_common.decl_not_flexarray);
5637 }
5638 else
5639 goto done;
5640
5641 if (CODE_CONTAINS_STRUCT (code, TS_DECL_WITH_VIS))
5642 {
5643 RB (t->decl_with_vis.defer_output);
5644 RB (t->decl_with_vis.hard_register);
5645 RB (t->decl_with_vis.common_flag);
5646 RB (t->decl_with_vis.in_text_section);
5647 RB (t->decl_with_vis.in_constant_pool);
5648 RB (t->decl_with_vis.dllimport_flag);
5649 RB (t->decl_with_vis.weak_flag);
5650 RB (t->decl_with_vis.seen_in_bind_expr);
5651 RB (t->decl_with_vis.comdat_flag);
5652 RB (t->decl_with_vis.visibility_specified);
5653 RB (t->decl_with_vis.init_priority_p);
5654 RB (t->decl_with_vis.shadowed_for_var_p);
5655 RB (t->decl_with_vis.cxx_constructor);
5656 RB (t->decl_with_vis.cxx_destructor);
5657 RB (t->decl_with_vis.final);
5658 RB (t->decl_with_vis.regdecl_flag);
5659 }
5660 else
5661 goto done;
5662
5663 if (CODE_CONTAINS_STRUCT (code, TS_FUNCTION_DECL))
5664 {
5665 RB (t->function_decl.static_ctor_flag);
5666 RB (t->function_decl.static_dtor_flag);
5667 RB (t->function_decl.uninlinable);
5668 RB (t->function_decl.possibly_inlined);
5669 RB (t->function_decl.novops_flag);
5670 RB (t->function_decl.returns_twice_flag);
5671 RB (t->function_decl.malloc_flag);
5672 RB (t->function_decl.declared_inline_flag);
5673 RB (t->function_decl.no_inline_warning_flag);
5674 RB (t->function_decl.no_instrument_function_entry_exit);
5675 RB (t->function_decl.no_limit_stack);
5676 RB (t->function_decl.disregard_inline_limits);
5677 RB (t->function_decl.pure_flag);
5678 RB (t->function_decl.looping_const_or_pure_flag);
5679
5680 RB (t->function_decl.has_debug_args_flag);
5681 RB (t->function_decl.versioned_function);
5682
5683 /* decl_type is a (misnamed) 2 bit discriminator. */
5684 unsigned kind = 0;
5685 kind |= unsigned (bits.b ()) << 0;
5686 kind |= unsigned (bits.b ()) << 1;
5687 t->function_decl.decl_type = function_decl_type (kind);
5688 }
5689 #undef RB_IF
5690 #undef RB
5691 done:
5692 return !get_overrun ();
5693 }
5694
5695 void
5696 trees_out::lang_decl_bools (tree t, bits_out& bits)
5697 {
5698 #define WB(X) (bits.b (X))
5699 const struct lang_decl *lang = DECL_LANG_SPECIFIC (t);
5700
5701 bits.bflush ();
5702 WB (lang->u.base.language == lang_cplusplus);
5703 WB ((lang->u.base.use_template >> 0) & 1);
5704 WB ((lang->u.base.use_template >> 1) & 1);
5705 /* Do not write lang->u.base.not_really_extern, importer will set
5706 when reading the definition (if any). */
5707 WB (lang->u.base.initialized_in_class);
5708 WB (lang->u.base.threadprivate_or_deleted_p);
5709 /* Do not write lang->u.base.anticipated_p, it is a property of the
5710 current TU. */
5711 WB (lang->u.base.friend_or_tls);
5712 WB (lang->u.base.unknown_bound_p);
5713 /* Do not write lang->u.base.odr_used, importer will recalculate if
5714 they do ODR use this decl. */
5715 WB (lang->u.base.concept_p);
5716 WB (lang->u.base.var_declared_inline_p);
5717 WB (lang->u.base.dependent_init_p);
5718 /* When building a header unit, everthing is marked as purview, (so
5719 we know which decls to write). But when we import them we do not
5720 want to mark them as in module purview. */
5721 WB (lang->u.base.module_purview_p && !header_module_p ());
5722 WB (lang->u.base.module_attach_p);
5723 WB (lang->u.base.module_keyed_decls_p);
5724 switch (lang->u.base.selector)
5725 {
5726 default:
5727 gcc_unreachable ();
5728
5729 case lds_fn: /* lang_decl_fn. */
5730 WB (lang->u.fn.global_ctor_p);
5731 WB (lang->u.fn.global_dtor_p);
5732 WB (lang->u.fn.static_function);
5733 WB (lang->u.fn.pure_virtual);
5734 WB (lang->u.fn.defaulted_p);
5735 WB (lang->u.fn.has_in_charge_parm_p);
5736 WB (lang->u.fn.has_vtt_parm_p);
5737 /* There shouldn't be a pending inline at this point. */
5738 gcc_assert (!lang->u.fn.pending_inline_p);
5739 WB (lang->u.fn.nonconverting);
5740 WB (lang->u.fn.thunk_p);
5741 WB (lang->u.fn.this_thunk_p);
5742 /* Do not stream lang->u.hidden_friend_p, it is a property of
5743 the TU. */
5744 WB (lang->u.fn.omp_declare_reduction_p);
5745 WB (lang->u.fn.has_dependent_explicit_spec_p);
5746 WB (lang->u.fn.immediate_fn_p);
5747 WB (lang->u.fn.maybe_deleted);
5748 /* We do not stream lang->u.fn.implicit_constexpr. */
5749 WB (lang->u.fn.escalated_p);
5750 WB (lang->u.fn.xobj_func);
5751 goto lds_min;
5752
5753 case lds_decomp: /* lang_decl_decomp. */
5754 /* No bools. */
5755 goto lds_min;
5756
5757 case lds_min: /* lang_decl_min. */
5758 lds_min:
5759 /* No bools. */
5760 break;
5761
5762 case lds_ns: /* lang_decl_ns. */
5763 /* No bools. */
5764 break;
5765
5766 case lds_parm: /* lang_decl_parm. */
5767 /* No bools. */
5768 break;
5769 }
5770 #undef WB
5771 }
5772
5773 bool
5774 trees_in::lang_decl_bools (tree t, bits_in& bits)
5775 {
5776 #define RB(X) ((X) = bits.b ())
5777 struct lang_decl *lang = DECL_LANG_SPECIFIC (t);
5778
5779 bits.bflush ();
5780 lang->u.base.language = bits.b () ? lang_cplusplus : lang_c;
5781 unsigned v;
5782 v = bits.b () << 0;
5783 v |= bits.b () << 1;
5784 lang->u.base.use_template = v;
5785 /* lang->u.base.not_really_extern is not streamed. */
5786 RB (lang->u.base.initialized_in_class);
5787 RB (lang->u.base.threadprivate_or_deleted_p);
5788 /* lang->u.base.anticipated_p is not streamed. */
5789 RB (lang->u.base.friend_or_tls);
5790 RB (lang->u.base.unknown_bound_p);
5791 /* lang->u.base.odr_used is not streamed. */
5792 RB (lang->u.base.concept_p);
5793 RB (lang->u.base.var_declared_inline_p);
5794 RB (lang->u.base.dependent_init_p);
5795 RB (lang->u.base.module_purview_p);
5796 RB (lang->u.base.module_attach_p);
5797 RB (lang->u.base.module_keyed_decls_p);
5798 switch (lang->u.base.selector)
5799 {
5800 default:
5801 gcc_unreachable ();
5802
5803 case lds_fn: /* lang_decl_fn. */
5804 RB (lang->u.fn.global_ctor_p);
5805 RB (lang->u.fn.global_dtor_p);
5806 RB (lang->u.fn.static_function);
5807 RB (lang->u.fn.pure_virtual);
5808 RB (lang->u.fn.defaulted_p);
5809 RB (lang->u.fn.has_in_charge_parm_p);
5810 RB (lang->u.fn.has_vtt_parm_p);
5811 RB (lang->u.fn.nonconverting);
5812 RB (lang->u.fn.thunk_p);
5813 RB (lang->u.fn.this_thunk_p);
5814 /* lang->u.fn.hidden_friend_p is not streamed. */
5815 RB (lang->u.fn.omp_declare_reduction_p);
5816 RB (lang->u.fn.has_dependent_explicit_spec_p);
5817 RB (lang->u.fn.immediate_fn_p);
5818 RB (lang->u.fn.maybe_deleted);
5819 /* We do not stream lang->u.fn.implicit_constexpr. */
5820 RB (lang->u.fn.escalated_p);
5821 RB (lang->u.fn.xobj_func);
5822 goto lds_min;
5823
5824 case lds_decomp: /* lang_decl_decomp. */
5825 /* No bools. */
5826 goto lds_min;
5827
5828 case lds_min: /* lang_decl_min. */
5829 lds_min:
5830 /* No bools. */
5831 break;
5832
5833 case lds_ns: /* lang_decl_ns. */
5834 /* No bools. */
5835 break;
5836
5837 case lds_parm: /* lang_decl_parm. */
5838 /* No bools. */
5839 break;
5840 }
5841 #undef RB
5842 return !get_overrun ();
5843 }
5844
5845 void
5846 trees_out::lang_type_bools (tree t, bits_out& bits)
5847 {
5848 #define WB(X) (bits.b (X))
5849 const struct lang_type *lang = TYPE_LANG_SPECIFIC (t);
5850
5851 bits.bflush ();
5852 WB (lang->has_type_conversion);
5853 WB (lang->has_copy_ctor);
5854 WB (lang->has_default_ctor);
5855 WB (lang->const_needs_init);
5856 WB (lang->ref_needs_init);
5857 WB (lang->has_const_copy_assign);
5858 WB ((lang->use_template >> 0) & 1);
5859 WB ((lang->use_template >> 1) & 1);
5860
5861 WB (lang->has_mutable);
5862 WB (lang->com_interface);
5863 WB (lang->non_pod_class);
5864 WB (lang->nearly_empty_p);
5865 WB (lang->user_align);
5866 WB (lang->has_copy_assign);
5867 WB (lang->has_new);
5868 WB (lang->has_array_new);
5869
5870 WB ((lang->gets_delete >> 0) & 1);
5871 WB ((lang->gets_delete >> 1) & 1);
5872 WB (lang->interface_only);
5873 WB (lang->interface_unknown);
5874 WB (lang->contains_empty_class_p);
5875 WB (lang->anon_aggr);
5876 WB (lang->non_zero_init);
5877 WB (lang->empty_p);
5878
5879 WB (lang->vec_new_uses_cookie);
5880 WB (lang->declared_class);
5881 WB (lang->diamond_shaped);
5882 WB (lang->repeated_base);
5883 gcc_assert (!lang->being_defined);
5884 // lang->debug_requested
5885 WB (lang->fields_readonly);
5886 WB (lang->ptrmemfunc_flag);
5887
5888 WB (lang->lazy_default_ctor);
5889 WB (lang->lazy_copy_ctor);
5890 WB (lang->lazy_copy_assign);
5891 WB (lang->lazy_destructor);
5892 WB (lang->has_const_copy_ctor);
5893 WB (lang->has_complex_copy_ctor);
5894 WB (lang->has_complex_copy_assign);
5895 WB (lang->non_aggregate);
5896
5897 WB (lang->has_complex_dflt);
5898 WB (lang->has_list_ctor);
5899 WB (lang->non_std_layout);
5900 WB (lang->is_literal);
5901 WB (lang->lazy_move_ctor);
5902 WB (lang->lazy_move_assign);
5903 WB (lang->has_complex_move_ctor);
5904 WB (lang->has_complex_move_assign);
5905
5906 WB (lang->has_constexpr_ctor);
5907 WB (lang->unique_obj_representations);
5908 WB (lang->unique_obj_representations_set);
5909 #undef WB
5910 }
5911
5912 bool
5913 trees_in::lang_type_bools (tree t, bits_in& bits)
5914 {
5915 #define RB(X) ((X) = bits.b ())
5916 struct lang_type *lang = TYPE_LANG_SPECIFIC (t);
5917
5918 bits.bflush ();
5919 RB (lang->has_type_conversion);
5920 RB (lang->has_copy_ctor);
5921 RB (lang->has_default_ctor);
5922 RB (lang->const_needs_init);
5923 RB (lang->ref_needs_init);
5924 RB (lang->has_const_copy_assign);
5925 unsigned v;
5926 v = bits.b () << 0;
5927 v |= bits.b () << 1;
5928 lang->use_template = v;
5929
5930 RB (lang->has_mutable);
5931 RB (lang->com_interface);
5932 RB (lang->non_pod_class);
5933 RB (lang->nearly_empty_p);
5934 RB (lang->user_align);
5935 RB (lang->has_copy_assign);
5936 RB (lang->has_new);
5937 RB (lang->has_array_new);
5938
5939 v = bits.b () << 0;
5940 v |= bits.b () << 1;
5941 lang->gets_delete = v;
5942 RB (lang->interface_only);
5943 RB (lang->interface_unknown);
5944 RB (lang->contains_empty_class_p);
5945 RB (lang->anon_aggr);
5946 RB (lang->non_zero_init);
5947 RB (lang->empty_p);
5948
5949 RB (lang->vec_new_uses_cookie);
5950 RB (lang->declared_class);
5951 RB (lang->diamond_shaped);
5952 RB (lang->repeated_base);
5953 gcc_assert (!lang->being_defined);
5954 gcc_assert (!lang->debug_requested);
5955 RB (lang->fields_readonly);
5956 RB (lang->ptrmemfunc_flag);
5957
5958 RB (lang->lazy_default_ctor);
5959 RB (lang->lazy_copy_ctor);
5960 RB (lang->lazy_copy_assign);
5961 RB (lang->lazy_destructor);
5962 RB (lang->has_const_copy_ctor);
5963 RB (lang->has_complex_copy_ctor);
5964 RB (lang->has_complex_copy_assign);
5965 RB (lang->non_aggregate);
5966
5967 RB (lang->has_complex_dflt);
5968 RB (lang->has_list_ctor);
5969 RB (lang->non_std_layout);
5970 RB (lang->is_literal);
5971 RB (lang->lazy_move_ctor);
5972 RB (lang->lazy_move_assign);
5973 RB (lang->has_complex_move_ctor);
5974 RB (lang->has_complex_move_assign);
5975
5976 RB (lang->has_constexpr_ctor);
5977 RB (lang->unique_obj_representations);
5978 RB (lang->unique_obj_representations_set);
5979 #undef RB
5980 return !get_overrun ();
5981 }
5982
5983 /* Read & write the core values and pointers. */
5984
5985 void
5986 trees_out::core_vals (tree t)
5987 {
5988 #define WU(X) (u (X))
5989 #define WT(X) (tree_node (X))
5990 tree_code code = TREE_CODE (t);
5991
5992 /* First by shape of the tree. */
5993
5994 if (CODE_CONTAINS_STRUCT (code, TS_DECL_MINIMAL))
5995 {
5996 /* Write this early, for better log information. */
5997 WT (t->decl_minimal.name);
5998 if (!DECL_TEMPLATE_PARM_P (t))
5999 WT (t->decl_minimal.context);
6000
6001 if (state)
6002 state->write_location (*this, t->decl_minimal.locus);
6003 }
6004
6005 if (CODE_CONTAINS_STRUCT (code, TS_TYPE_COMMON))
6006 {
6007 /* The only types we write also have TYPE_NON_COMMON. */
6008 gcc_checking_assert (CODE_CONTAINS_STRUCT (code, TS_TYPE_NON_COMMON));
6009
6010 /* We only stream the main variant. */
6011 gcc_checking_assert (TYPE_MAIN_VARIANT (t) == t);
6012
6013 /* Stream the name & context first, for better log information */
6014 WT (t->type_common.name);
6015 WT (t->type_common.context);
6016
6017 /* By construction we want to make sure we have the canonical
6018 and main variants already in the type table, so emit them
6019 now. */
6020 WT (t->type_common.main_variant);
6021
6022 tree canonical = t->type_common.canonical;
6023 if (canonical && DECL_TEMPLATE_PARM_P (TYPE_NAME (t)))
6024 /* We do not want to wander into different templates.
6025 Reconstructed on stream in. */
6026 canonical = t;
6027 WT (canonical);
6028
6029 /* type_common.next_variant is internally manipulated. */
6030 /* type_common.pointer_to, type_common.reference_to. */
6031
6032 if (streaming_p ())
6033 {
6034 WU (t->type_common.precision);
6035 WU (t->type_common.contains_placeholder_bits);
6036 WU (t->type_common.mode);
6037 WU (t->type_common.align);
6038 }
6039
6040 if (!RECORD_OR_UNION_CODE_P (code))
6041 {
6042 WT (t->type_common.size);
6043 WT (t->type_common.size_unit);
6044 }
6045 WT (t->type_common.attributes);
6046
6047 WT (t->type_common.common.chain); /* TYPE_STUB_DECL. */
6048 }
6049
6050 if (CODE_CONTAINS_STRUCT (code, TS_DECL_COMMON))
6051 {
6052 if (streaming_p ())
6053 {
6054 WU (t->decl_common.mode);
6055 WU (t->decl_common.off_align);
6056 WU (t->decl_common.align);
6057 }
6058
6059 /* For templates these hold instantiation (partial and/or
6060 specialization) information. */
6061 if (code != TEMPLATE_DECL)
6062 {
6063 WT (t->decl_common.size);
6064 WT (t->decl_common.size_unit);
6065 }
6066
6067 WT (t->decl_common.attributes);
6068 // FIXME: Does this introduce cross-decl links? For instance
6069 // from instantiation to the template. If so, we'll need more
6070 // deduplication logic. I think we'll need to walk the blocks
6071 // of the owning function_decl's abstract origin in tandem, to
6072 // generate the locating data needed?
6073 WT (t->decl_common.abstract_origin);
6074 }
6075
6076 if (CODE_CONTAINS_STRUCT (code, TS_DECL_WITH_VIS))
6077 {
6078 WT (t->decl_with_vis.assembler_name);
6079 if (streaming_p ())
6080 WU (t->decl_with_vis.visibility);
6081 }
6082
6083 if (CODE_CONTAINS_STRUCT (code, TS_TYPE_NON_COMMON))
6084 {
6085 if (code == ENUMERAL_TYPE)
6086 {
6087 /* These fields get set even for opaque enums that lack a
6088 definition, so we stream them directly for each ENUMERAL_TYPE.
6089 We stream TYPE_VALUES as part of the definition. */
6090 WT (t->type_non_common.maxval);
6091 WT (t->type_non_common.minval);
6092 }
6093 /* Records and unions hold FIELDS, VFIELD & BINFO on these
6094 things. */
6095 else if (!RECORD_OR_UNION_CODE_P (code))
6096 {
6097 // FIXME: These are from tpl_parm_value's 'type' writing.
6098 // Perhaps it should just be doing them directly?
6099 gcc_checking_assert (code == TEMPLATE_TYPE_PARM
6100 || code == TEMPLATE_TEMPLATE_PARM
6101 || code == BOUND_TEMPLATE_TEMPLATE_PARM);
6102 gcc_checking_assert (!TYPE_CACHED_VALUES_P (t));
6103 WT (t->type_non_common.values);
6104 WT (t->type_non_common.maxval);
6105 WT (t->type_non_common.minval);
6106 }
6107
6108 WT (t->type_non_common.lang_1);
6109 }
6110
6111 if (CODE_CONTAINS_STRUCT (code, TS_EXP))
6112 {
6113 if (state)
6114 state->write_location (*this, t->exp.locus);
6115
6116 /* Walk in forward order, as (for instance) REQUIRES_EXPR has a
6117 bunch of unscoped parms on its first operand. It's safer to
6118 create those in order. */
6119 bool vl = TREE_CODE_CLASS (code) == tcc_vl_exp;
6120 for (unsigned limit = (vl ? VL_EXP_OPERAND_LENGTH (t)
6121 : TREE_OPERAND_LENGTH (t)),
6122 ix = unsigned (vl); ix != limit; ix++)
6123 WT (TREE_OPERAND (t, ix));
6124 }
6125 else
6126 /* The CODE_CONTAINS tables were inaccurate when I started. */
6127 gcc_checking_assert (TREE_CODE_CLASS (code) != tcc_expression
6128 && TREE_CODE_CLASS (code) != tcc_binary
6129 && TREE_CODE_CLASS (code) != tcc_unary
6130 && TREE_CODE_CLASS (code) != tcc_reference
6131 && TREE_CODE_CLASS (code) != tcc_comparison
6132 && TREE_CODE_CLASS (code) != tcc_statement
6133 && TREE_CODE_CLASS (code) != tcc_vl_exp);
6134
6135 /* Then by CODE. Special cases and/or 1:1 tree shape
6136 correspondance. */
6137 switch (code)
6138 {
6139 default:
6140 break;
6141
6142 case ARGUMENT_PACK_SELECT: /* Transient during instantiation. */
6143 case DEFERRED_PARSE: /* Expanded upon completion of
6144 outermost class. */
6145 case IDENTIFIER_NODE: /* Streamed specially. */
6146 case BINDING_VECTOR: /* Only in namespace-scope symbol
6147 table. */
6148 case SSA_NAME:
6149 case TRANSLATION_UNIT_DECL: /* There is only one, it is a
6150 global_tree. */
6151 case USERDEF_LITERAL: /* Expanded during parsing. */
6152 gcc_unreachable (); /* Should never meet. */
6153
6154 /* Constants. */
6155 case COMPLEX_CST:
6156 WT (TREE_REALPART (t));
6157 WT (TREE_IMAGPART (t));
6158 break;
6159
6160 case FIXED_CST:
6161 gcc_unreachable (); /* Not supported in C++. */
6162
6163 case INTEGER_CST:
6164 if (streaming_p ())
6165 {
6166 unsigned num = TREE_INT_CST_EXT_NUNITS (t);
6167 for (unsigned ix = 0; ix != num; ix++)
6168 wu (TREE_INT_CST_ELT (t, ix));
6169 }
6170 break;
6171
6172 case POLY_INT_CST:
6173 if (streaming_p ())
6174 for (unsigned ix = 0; ix != NUM_POLY_INT_COEFFS; ix++)
6175 WT (POLY_INT_CST_COEFF (t, ix));
6176 break;
6177
6178 case REAL_CST:
6179 if (streaming_p ())
6180 buf (TREE_REAL_CST_PTR (t), sizeof (real_value));
6181 break;
6182
6183 case STRING_CST:
6184 /* Streamed during start. */
6185 break;
6186
6187 case VECTOR_CST:
6188 for (unsigned ix = vector_cst_encoded_nelts (t); ix--;)
6189 WT (VECTOR_CST_ENCODED_ELT (t, ix));
6190 break;
6191
6192 /* Decls. */
6193 case VAR_DECL:
6194 if (DECL_CONTEXT (t)
6195 && TREE_CODE (DECL_CONTEXT (t)) != FUNCTION_DECL)
6196 break;
6197 /* FALLTHROUGH */
6198
6199 case RESULT_DECL:
6200 case PARM_DECL:
6201 if (DECL_HAS_VALUE_EXPR_P (t))
6202 WT (DECL_VALUE_EXPR (t));
6203 /* FALLTHROUGH */
6204
6205 case CONST_DECL:
6206 case IMPORTED_DECL:
6207 WT (t->decl_common.initial);
6208 break;
6209
6210 case FIELD_DECL:
6211 WT (t->field_decl.offset);
6212 WT (t->field_decl.bit_field_type);
6213 WT (t->field_decl.qualifier); /* bitfield unit. */
6214 WT (t->field_decl.bit_offset);
6215 WT (t->field_decl.fcontext);
6216 WT (t->decl_common.initial);
6217 break;
6218
6219 case LABEL_DECL:
6220 if (streaming_p ())
6221 {
6222 WU (t->label_decl.label_decl_uid);
6223 WU (t->label_decl.eh_landing_pad_nr);
6224 }
6225 break;
6226
6227 case FUNCTION_DECL:
6228 if (streaming_p ())
6229 {
6230 /* Builtins can be streamed by value when a header declares
6231 them. */
6232 WU (DECL_BUILT_IN_CLASS (t));
6233 if (DECL_BUILT_IN_CLASS (t) != NOT_BUILT_IN)
6234 WU (DECL_UNCHECKED_FUNCTION_CODE (t));
6235 }
6236
6237 WT (t->function_decl.personality);
6238 WT (t->function_decl.function_specific_target);
6239 WT (t->function_decl.function_specific_optimization);
6240 WT (t->function_decl.vindex);
6241
6242 if (DECL_HAS_DEPENDENT_EXPLICIT_SPEC_P (t))
6243 WT (lookup_explicit_specifier (t));
6244 break;
6245
6246 case USING_DECL:
6247 /* USING_DECL_DECLS */
6248 WT (t->decl_common.initial);
6249 /* FALLTHROUGH */
6250
6251 case TYPE_DECL:
6252 /* USING_DECL: USING_DECL_SCOPE */
6253 /* TYPE_DECL: DECL_ORIGINAL_TYPE */
6254 WT (t->decl_non_common.result);
6255 break;
6256
6257 /* Miscellaneous common nodes. */
6258 case BLOCK:
6259 if (state)
6260 {
6261 state->write_location (*this, t->block.locus);
6262 state->write_location (*this, t->block.end_locus);
6263 }
6264
6265 /* DECL_LOCAL_DECL_P decls are first encountered here and
6266 streamed by value. */
6267 for (tree decls = t->block.vars; decls; decls = DECL_CHAIN (decls))
6268 {
6269 if (VAR_OR_FUNCTION_DECL_P (decls)
6270 && DECL_LOCAL_DECL_P (decls))
6271 {
6272 /* Make sure this is the first encounter, and mark for
6273 walk-by-value. */
6274 gcc_checking_assert (!TREE_VISITED (decls)
6275 && !DECL_TEMPLATE_INFO (decls));
6276 mark_by_value (decls);
6277 }
6278 tree_node (decls);
6279 }
6280 tree_node (NULL_TREE);
6281
6282 /* nonlocalized_vars is a middle-end thing. */
6283 WT (t->block.subblocks);
6284 WT (t->block.supercontext);
6285 // FIXME: As for decl's abstract_origin, does this introduce crosslinks?
6286 WT (t->block.abstract_origin);
6287 /* fragment_origin, fragment_chain are middle-end things. */
6288 WT (t->block.chain);
6289 /* nonlocalized_vars, block_num & die are middle endy/debug
6290 things. */
6291 break;
6292
6293 case CALL_EXPR:
6294 if (streaming_p ())
6295 WU (t->base.u.ifn);
6296 break;
6297
6298 case CONSTRUCTOR:
6299 // This must be streamed /after/ we've streamed the type,
6300 // because it can directly refer to elements of the type. Eg,
6301 // FIELD_DECLs of a RECORD_TYPE.
6302 break;
6303
6304 case OMP_CLAUSE:
6305 {
6306 /* The ompcode is serialized in start. */
6307 if (streaming_p ())
6308 WU (t->omp_clause.subcode.map_kind);
6309 if (state)
6310 state->write_location (*this, t->omp_clause.locus);
6311
6312 unsigned len = omp_clause_num_ops[OMP_CLAUSE_CODE (t)];
6313 for (unsigned ix = 0; ix != len; ix++)
6314 WT (t->omp_clause.ops[ix]);
6315 }
6316 break;
6317
6318 case STATEMENT_LIST:
6319 for (tree stmt : tsi_range (t))
6320 if (stmt)
6321 WT (stmt);
6322 WT (NULL_TREE);
6323 break;
6324
6325 case OPTIMIZATION_NODE:
6326 case TARGET_OPTION_NODE:
6327 // FIXME: Our representation for these two nodes is a cache of
6328 // the resulting set of options. Not a record of the options
6329 // that got changed by a particular attribute or pragma. Should
6330 // we record that, or should we record the diff from the command
6331 // line options? The latter seems the right behaviour, but is
6332 // (a) harder, and I guess could introduce strangeness if the
6333 // importer has set some incompatible set of optimization flags?
6334 gcc_unreachable ();
6335 break;
6336
6337 case TREE_BINFO:
6338 {
6339 WT (t->binfo.common.chain);
6340 WT (t->binfo.offset);
6341 WT (t->binfo.inheritance);
6342 WT (t->binfo.vptr_field);
6343
6344 WT (t->binfo.vtable);
6345 WT (t->binfo.virtuals);
6346 WT (t->binfo.vtt_subvtt);
6347 WT (t->binfo.vtt_vptr);
6348
6349 tree_vec (BINFO_BASE_ACCESSES (t));
6350 unsigned num = vec_safe_length (BINFO_BASE_ACCESSES (t));
6351 for (unsigned ix = 0; ix != num; ix++)
6352 WT (BINFO_BASE_BINFO (t, ix));
6353 }
6354 break;
6355
6356 case TREE_LIST:
6357 WT (t->list.purpose);
6358 WT (t->list.value);
6359 WT (t->list.common.chain);
6360 break;
6361
6362 case TREE_VEC:
6363 for (unsigned ix = TREE_VEC_LENGTH (t); ix--;)
6364 WT (TREE_VEC_ELT (t, ix));
6365 /* We stash NON_DEFAULT_TEMPLATE_ARGS_COUNT on TREE_CHAIN! */
6366 gcc_checking_assert (!t->type_common.common.chain
6367 || (TREE_CODE (t->type_common.common.chain)
6368 == INTEGER_CST));
6369 WT (t->type_common.common.chain);
6370 break;
6371
6372 /* C++-specific nodes ... */
6373 case BASELINK:
6374 WT (((lang_tree_node *)t)->baselink.binfo);
6375 WT (((lang_tree_node *)t)->baselink.functions);
6376 WT (((lang_tree_node *)t)->baselink.access_binfo);
6377 break;
6378
6379 case CONSTRAINT_INFO:
6380 WT (((lang_tree_node *)t)->constraint_info.template_reqs);
6381 WT (((lang_tree_node *)t)->constraint_info.declarator_reqs);
6382 WT (((lang_tree_node *)t)->constraint_info.associated_constr);
6383 break;
6384
6385 case DEFERRED_NOEXCEPT:
6386 WT (((lang_tree_node *)t)->deferred_noexcept.pattern);
6387 WT (((lang_tree_node *)t)->deferred_noexcept.args);
6388 break;
6389
6390 case LAMBDA_EXPR:
6391 WT (((lang_tree_node *)t)->lambda_expression.capture_list);
6392 WT (((lang_tree_node *)t)->lambda_expression.this_capture);
6393 WT (((lang_tree_node *)t)->lambda_expression.extra_scope);
6394 WT (((lang_tree_node *)t)->lambda_expression.regen_info);
6395 WT (((lang_tree_node *)t)->lambda_expression.extra_args);
6396 /* pending_proxies is a parse-time thing. */
6397 gcc_assert (!((lang_tree_node *)t)->lambda_expression.pending_proxies);
6398 if (state)
6399 state->write_location
6400 (*this, ((lang_tree_node *)t)->lambda_expression.locus);
6401 if (streaming_p ())
6402 {
6403 WU (((lang_tree_node *)t)->lambda_expression.default_capture_mode);
6404 WU (((lang_tree_node *)t)->lambda_expression.discriminator_scope);
6405 WU (((lang_tree_node *)t)->lambda_expression.discriminator_sig);
6406 }
6407 break;
6408
6409 case OVERLOAD:
6410 WT (((lang_tree_node *)t)->overload.function);
6411 WT (t->common.chain);
6412 break;
6413
6414 case PTRMEM_CST:
6415 WT (((lang_tree_node *)t)->ptrmem.member);
6416 break;
6417
6418 case STATIC_ASSERT:
6419 WT (((lang_tree_node *)t)->static_assertion.condition);
6420 WT (((lang_tree_node *)t)->static_assertion.message);
6421 if (state)
6422 state->write_location
6423 (*this, ((lang_tree_node *)t)->static_assertion.location);
6424 break;
6425
6426 case TEMPLATE_DECL:
6427 /* Streamed with the template_decl node itself. */
6428 gcc_checking_assert
6429 (TREE_VISITED (((lang_tree_node *)t)->template_decl.arguments));
6430 gcc_checking_assert
6431 (TREE_VISITED (((lang_tree_node *)t)->template_decl.result));
6432 if (DECL_UNINSTANTIATED_TEMPLATE_FRIEND_P (t))
6433 WT (DECL_CHAIN (t));
6434 break;
6435
6436 case TEMPLATE_INFO:
6437 {
6438 WT (((lang_tree_node *)t)->template_info.tmpl);
6439 WT (((lang_tree_node *)t)->template_info.args);
6440 WT (((lang_tree_node *)t)->template_info.partial);
6441
6442 const auto *ac = (((lang_tree_node *)t)
6443 ->template_info.deferred_access_checks);
6444 unsigned len = vec_safe_length (ac);
6445 if (streaming_p ())
6446 u (len);
6447 if (len)
6448 {
6449 for (unsigned ix = 0; ix != len; ix++)
6450 {
6451 const auto &m = (*ac)[ix];
6452 WT (m.binfo);
6453 WT (m.decl);
6454 WT (m.diag_decl);
6455 if (state)
6456 state->write_location (*this, m.loc);
6457 }
6458 }
6459 }
6460 break;
6461
6462 case TEMPLATE_PARM_INDEX:
6463 if (streaming_p ())
6464 {
6465 WU (((lang_tree_node *)t)->tpi.index);
6466 WU (((lang_tree_node *)t)->tpi.level);
6467 WU (((lang_tree_node *)t)->tpi.orig_level);
6468 }
6469 WT (((lang_tree_node *)t)->tpi.decl);
6470 /* TEMPLATE_PARM_DESCENDANTS (AKA TREE_CHAIN) is an internal
6471 cache, do not stream. */
6472 break;
6473
6474 case TRAIT_EXPR:
6475 WT (((lang_tree_node *)t)->trait_expression.type1);
6476 WT (((lang_tree_node *)t)->trait_expression.type2);
6477 if (streaming_p ())
6478 WU (((lang_tree_node *)t)->trait_expression.kind);
6479 break;
6480 }
6481
6482 if (CODE_CONTAINS_STRUCT (code, TS_TYPED))
6483 {
6484 /* We want to stream the type of a expression-like nodes /after/
6485 we've streamed the operands. The type often contains (bits
6486 of the) types of the operands, and with things like decltype
6487 and noexcept in play, we really want to stream the decls
6488 defining the type before we try and stream the type on its
6489 own. Otherwise we can find ourselves trying to read in a
6490 decl, when we're already partially reading in a component of
6491 its type. And that's bad. */
6492 tree type = t->typed.type;
6493 unsigned prec = 0;
6494
6495 switch (code)
6496 {
6497 default:
6498 break;
6499
6500 case TEMPLATE_DECL:
6501 /* We fill in the template's type separately. */
6502 type = NULL_TREE;
6503 break;
6504
6505 case TYPE_DECL:
6506 if (DECL_ORIGINAL_TYPE (t) && t == TYPE_NAME (type))
6507 /* This is a typedef. We set its type separately. */
6508 type = NULL_TREE;
6509 break;
6510
6511 case ENUMERAL_TYPE:
6512 if (type && !ENUM_FIXED_UNDERLYING_TYPE_P (t))
6513 {
6514 /* Type is a restricted range integer type derived from the
6515 integer_types. Find the right one. */
6516 prec = TYPE_PRECISION (type);
6517 tree name = DECL_NAME (TYPE_NAME (type));
6518
6519 for (unsigned itk = itk_none; itk--;)
6520 if (integer_types[itk]
6521 && DECL_NAME (TYPE_NAME (integer_types[itk])) == name)
6522 {
6523 type = integer_types[itk];
6524 break;
6525 }
6526 gcc_assert (type != t->typed.type);
6527 }
6528 break;
6529 }
6530
6531 WT (type);
6532 if (prec && streaming_p ())
6533 WU (prec);
6534 }
6535
6536 if (TREE_CODE (t) == CONSTRUCTOR)
6537 {
6538 unsigned len = vec_safe_length (t->constructor.elts);
6539 if (streaming_p ())
6540 WU (len);
6541 if (len)
6542 for (unsigned ix = 0; ix != len; ix++)
6543 {
6544 const constructor_elt &elt = (*t->constructor.elts)[ix];
6545
6546 WT (elt.index);
6547 WT (elt.value);
6548 }
6549 }
6550
6551 #undef WT
6552 #undef WU
6553 }
6554
6555 // Streaming in a reference to a decl can cause that decl to be
6556 // TREE_USED, which is the mark_used behaviour we need most of the
6557 // time. The trees_in::unused can be incremented to inhibit this,
6558 // which is at least needed for vtables.
6559
6560 bool
6561 trees_in::core_vals (tree t)
6562 {
6563 #define RU(X) ((X) = u ())
6564 #define RUC(T,X) ((X) = T (u ()))
6565 #define RT(X) ((X) = tree_node ())
6566 #define RTU(X) ((X) = tree_node (true))
6567 tree_code code = TREE_CODE (t);
6568
6569 /* First by tree shape. */
6570 if (CODE_CONTAINS_STRUCT (code, TS_DECL_MINIMAL))
6571 {
6572 RT (t->decl_minimal.name);
6573 if (!DECL_TEMPLATE_PARM_P (t))
6574 RT (t->decl_minimal.context);
6575
6576 /* Don't zap the locus just yet, we don't record it correctly
6577 and thus lose all location information. */
6578 t->decl_minimal.locus = state->read_location (*this);
6579 }
6580
6581 if (CODE_CONTAINS_STRUCT (code, TS_TYPE_COMMON))
6582 {
6583 RT (t->type_common.name);
6584 RT (t->type_common.context);
6585
6586 RT (t->type_common.main_variant);
6587 RT (t->type_common.canonical);
6588
6589 /* type_common.next_variant is internally manipulated. */
6590 /* type_common.pointer_to, type_common.reference_to. */
6591
6592 RU (t->type_common.precision);
6593 RU (t->type_common.contains_placeholder_bits);
6594 RUC (machine_mode, t->type_common.mode);
6595 RU (t->type_common.align);
6596
6597 if (!RECORD_OR_UNION_CODE_P (code))
6598 {
6599 RT (t->type_common.size);
6600 RT (t->type_common.size_unit);
6601 }
6602 RT (t->type_common.attributes);
6603
6604 RT (t->type_common.common.chain); /* TYPE_STUB_DECL. */
6605 }
6606
6607 if (CODE_CONTAINS_STRUCT (code, TS_DECL_COMMON))
6608 {
6609 RUC (machine_mode, t->decl_common.mode);
6610 RU (t->decl_common.off_align);
6611 RU (t->decl_common.align);
6612
6613 if (code != TEMPLATE_DECL)
6614 {
6615 RT (t->decl_common.size);
6616 RT (t->decl_common.size_unit);
6617 }
6618
6619 RT (t->decl_common.attributes);
6620 RT (t->decl_common.abstract_origin);
6621 }
6622
6623 if (CODE_CONTAINS_STRUCT (code, TS_DECL_WITH_VIS))
6624 {
6625 RT (t->decl_with_vis.assembler_name);
6626 RUC (symbol_visibility, t->decl_with_vis.visibility);
6627 }
6628
6629 if (CODE_CONTAINS_STRUCT (code, TS_TYPE_NON_COMMON))
6630 {
6631 if (code == ENUMERAL_TYPE)
6632 {
6633 /* These fields get set even for opaque enums that lack a
6634 definition, so we stream them directly for each ENUMERAL_TYPE.
6635 We stream TYPE_VALUES as part of the definition. */
6636 RT (t->type_non_common.maxval);
6637 RT (t->type_non_common.minval);
6638 }
6639 /* Records and unions hold FIELDS, VFIELD & BINFO on these
6640 things. */
6641 else if (!RECORD_OR_UNION_CODE_P (code))
6642 {
6643 /* This is not clobbering TYPE_CACHED_VALUES, because this
6644 is a type that doesn't have any. */
6645 gcc_checking_assert (!TYPE_CACHED_VALUES_P (t));
6646 RT (t->type_non_common.values);
6647 RT (t->type_non_common.maxval);
6648 RT (t->type_non_common.minval);
6649 }
6650
6651 RT (t->type_non_common.lang_1);
6652 }
6653
6654 if (CODE_CONTAINS_STRUCT (code, TS_EXP))
6655 {
6656 t->exp.locus = state->read_location (*this);
6657
6658 bool vl = TREE_CODE_CLASS (code) == tcc_vl_exp;
6659 for (unsigned limit = (vl ? VL_EXP_OPERAND_LENGTH (t)
6660 : TREE_OPERAND_LENGTH (t)),
6661 ix = unsigned (vl); ix != limit; ix++)
6662 RTU (TREE_OPERAND (t, ix));
6663 }
6664
6665 /* Then by CODE. Special cases and/or 1:1 tree shape
6666 correspondance. */
6667 switch (code)
6668 {
6669 default:
6670 break;
6671
6672 case ARGUMENT_PACK_SELECT:
6673 case DEFERRED_PARSE:
6674 case IDENTIFIER_NODE:
6675 case BINDING_VECTOR:
6676 case SSA_NAME:
6677 case TRANSLATION_UNIT_DECL:
6678 case USERDEF_LITERAL:
6679 return false; /* Should never meet. */
6680
6681 /* Constants. */
6682 case COMPLEX_CST:
6683 RT (TREE_REALPART (t));
6684 RT (TREE_IMAGPART (t));
6685 break;
6686
6687 case FIXED_CST:
6688 /* Not suported in C++. */
6689 return false;
6690
6691 case INTEGER_CST:
6692 {
6693 unsigned num = TREE_INT_CST_EXT_NUNITS (t);
6694 for (unsigned ix = 0; ix != num; ix++)
6695 TREE_INT_CST_ELT (t, ix) = wu ();
6696 }
6697 break;
6698
6699 case POLY_INT_CST:
6700 for (unsigned ix = 0; ix != NUM_POLY_INT_COEFFS; ix++)
6701 RT (POLY_INT_CST_COEFF (t, ix));
6702 break;
6703
6704 case REAL_CST:
6705 if (const void *bytes = buf (sizeof (real_value)))
6706 memcpy (TREE_REAL_CST_PTR (t), bytes, sizeof (real_value));
6707 break;
6708
6709 case STRING_CST:
6710 /* Streamed during start. */
6711 break;
6712
6713 case VECTOR_CST:
6714 for (unsigned ix = vector_cst_encoded_nelts (t); ix--;)
6715 RT (VECTOR_CST_ENCODED_ELT (t, ix));
6716 break;
6717
6718 /* Decls. */
6719 case VAR_DECL:
6720 if (DECL_CONTEXT (t)
6721 && TREE_CODE (DECL_CONTEXT (t)) != FUNCTION_DECL)
6722 break;
6723 /* FALLTHROUGH */
6724
6725 case RESULT_DECL:
6726 case PARM_DECL:
6727 if (DECL_HAS_VALUE_EXPR_P (t))
6728 {
6729 /* The DECL_VALUE hash table is a cache, thus if we're
6730 reading a duplicate (which we end up discarding), the
6731 value expr will also be cleaned up at the next gc. */
6732 tree val = tree_node ();
6733 SET_DECL_VALUE_EXPR (t, val);
6734 }
6735 /* FALLTHROUGH */
6736
6737 case CONST_DECL:
6738 case IMPORTED_DECL:
6739 RT (t->decl_common.initial);
6740 break;
6741
6742 case FIELD_DECL:
6743 RT (t->field_decl.offset);
6744 RT (t->field_decl.bit_field_type);
6745 RT (t->field_decl.qualifier);
6746 RT (t->field_decl.bit_offset);
6747 RT (t->field_decl.fcontext);
6748 RT (t->decl_common.initial);
6749 break;
6750
6751 case LABEL_DECL:
6752 RU (t->label_decl.label_decl_uid);
6753 RU (t->label_decl.eh_landing_pad_nr);
6754 break;
6755
6756 case FUNCTION_DECL:
6757 {
6758 unsigned bltin = u ();
6759 t->function_decl.built_in_class = built_in_class (bltin);
6760 if (bltin != NOT_BUILT_IN)
6761 {
6762 bltin = u ();
6763 DECL_UNCHECKED_FUNCTION_CODE (t) = built_in_function (bltin);
6764 }
6765
6766 RT (t->function_decl.personality);
6767 RT (t->function_decl.function_specific_target);
6768 RT (t->function_decl.function_specific_optimization);
6769 RT (t->function_decl.vindex);
6770
6771 if (DECL_HAS_DEPENDENT_EXPLICIT_SPEC_P (t))
6772 {
6773 tree spec;
6774 RT (spec);
6775 store_explicit_specifier (t, spec);
6776 }
6777 }
6778 break;
6779
6780 case USING_DECL:
6781 /* USING_DECL_DECLS */
6782 RT (t->decl_common.initial);
6783 /* FALLTHROUGH */
6784
6785 case TYPE_DECL:
6786 /* USING_DECL: USING_DECL_SCOPE */
6787 /* TYPE_DECL: DECL_ORIGINAL_TYPE */
6788 RT (t->decl_non_common.result);
6789 break;
6790
6791 /* Miscellaneous common nodes. */
6792 case BLOCK:
6793 t->block.locus = state->read_location (*this);
6794 t->block.end_locus = state->read_location (*this);
6795
6796 for (tree *chain = &t->block.vars;;)
6797 if (tree decl = tree_node ())
6798 {
6799 /* For a deduplicated local type or enumerator, chain the
6800 duplicate decl instead of the canonical in-TU decl. Seeing
6801 a duplicate here means the containing function whose body
6802 we're streaming in is a duplicate too, so we'll end up
6803 discarding this BLOCK (and the rest of the duplicate function
6804 body) anyway. */
6805 decl = maybe_duplicate (decl);
6806
6807 if (!DECL_P (decl) || DECL_CHAIN (decl))
6808 {
6809 set_overrun ();
6810 break;
6811 }
6812 *chain = decl;
6813 chain = &DECL_CHAIN (decl);
6814 }
6815 else
6816 break;
6817
6818 /* nonlocalized_vars is middle-end. */
6819 RT (t->block.subblocks);
6820 RT (t->block.supercontext);
6821 RT (t->block.abstract_origin);
6822 /* fragment_origin, fragment_chain are middle-end. */
6823 RT (t->block.chain);
6824 /* nonlocalized_vars, block_num, die are middle endy/debug
6825 things. */
6826 break;
6827
6828 case CALL_EXPR:
6829 RUC (internal_fn, t->base.u.ifn);
6830 break;
6831
6832 case CONSTRUCTOR:
6833 // Streamed after the node's type.
6834 break;
6835
6836 case OMP_CLAUSE:
6837 {
6838 RU (t->omp_clause.subcode.map_kind);
6839 t->omp_clause.locus = state->read_location (*this);
6840
6841 unsigned len = omp_clause_num_ops[OMP_CLAUSE_CODE (t)];
6842 for (unsigned ix = 0; ix != len; ix++)
6843 RT (t->omp_clause.ops[ix]);
6844 }
6845 break;
6846
6847 case STATEMENT_LIST:
6848 {
6849 tree_stmt_iterator iter = tsi_start (t);
6850 for (tree stmt; RT (stmt);)
6851 tsi_link_after (&iter, stmt, TSI_CONTINUE_LINKING);
6852 }
6853 break;
6854
6855 case OPTIMIZATION_NODE:
6856 case TARGET_OPTION_NODE:
6857 /* Not yet implemented, see trees_out::core_vals. */
6858 gcc_unreachable ();
6859 break;
6860
6861 case TREE_BINFO:
6862 RT (t->binfo.common.chain);
6863 RT (t->binfo.offset);
6864 RT (t->binfo.inheritance);
6865 RT (t->binfo.vptr_field);
6866
6867 /* Do not mark the vtables as USED in the address expressions
6868 here. */
6869 unused++;
6870 RT (t->binfo.vtable);
6871 RT (t->binfo.virtuals);
6872 RT (t->binfo.vtt_subvtt);
6873 RT (t->binfo.vtt_vptr);
6874 unused--;
6875
6876 BINFO_BASE_ACCESSES (t) = tree_vec ();
6877 if (!get_overrun ())
6878 {
6879 unsigned num = vec_safe_length (BINFO_BASE_ACCESSES (t));
6880 for (unsigned ix = 0; ix != num; ix++)
6881 BINFO_BASE_APPEND (t, tree_node ());
6882 }
6883 break;
6884
6885 case TREE_LIST:
6886 RT (t->list.purpose);
6887 RT (t->list.value);
6888 RT (t->list.common.chain);
6889 break;
6890
6891 case TREE_VEC:
6892 for (unsigned ix = TREE_VEC_LENGTH (t); ix--;)
6893 RT (TREE_VEC_ELT (t, ix));
6894 RT (t->type_common.common.chain);
6895 break;
6896
6897 /* C++-specific nodes ... */
6898 case BASELINK:
6899 RT (((lang_tree_node *)t)->baselink.binfo);
6900 RTU (((lang_tree_node *)t)->baselink.functions);
6901 RT (((lang_tree_node *)t)->baselink.access_binfo);
6902 break;
6903
6904 case CONSTRAINT_INFO:
6905 RT (((lang_tree_node *)t)->constraint_info.template_reqs);
6906 RT (((lang_tree_node *)t)->constraint_info.declarator_reqs);
6907 RT (((lang_tree_node *)t)->constraint_info.associated_constr);
6908 break;
6909
6910 case DEFERRED_NOEXCEPT:
6911 RT (((lang_tree_node *)t)->deferred_noexcept.pattern);
6912 RT (((lang_tree_node *)t)->deferred_noexcept.args);
6913 break;
6914
6915 case LAMBDA_EXPR:
6916 RT (((lang_tree_node *)t)->lambda_expression.capture_list);
6917 RT (((lang_tree_node *)t)->lambda_expression.this_capture);
6918 RT (((lang_tree_node *)t)->lambda_expression.extra_scope);
6919 RT (((lang_tree_node *)t)->lambda_expression.regen_info);
6920 RT (((lang_tree_node *)t)->lambda_expression.extra_args);
6921 /* lambda_expression.pending_proxies is NULL */
6922 ((lang_tree_node *)t)->lambda_expression.locus
6923 = state->read_location (*this);
6924 RUC (cp_lambda_default_capture_mode_type,
6925 ((lang_tree_node *)t)->lambda_expression.default_capture_mode);
6926 RU (((lang_tree_node *)t)->lambda_expression.discriminator_scope);
6927 RU (((lang_tree_node *)t)->lambda_expression.discriminator_sig);
6928 break;
6929
6930 case OVERLOAD:
6931 RT (((lang_tree_node *)t)->overload.function);
6932 RT (t->common.chain);
6933 break;
6934
6935 case PTRMEM_CST:
6936 RT (((lang_tree_node *)t)->ptrmem.member);
6937 break;
6938
6939 case STATIC_ASSERT:
6940 RT (((lang_tree_node *)t)->static_assertion.condition);
6941 RT (((lang_tree_node *)t)->static_assertion.message);
6942 ((lang_tree_node *)t)->static_assertion.location
6943 = state->read_location (*this);
6944 break;
6945
6946 case TEMPLATE_DECL:
6947 /* Streamed when reading the raw template decl itself. */
6948 gcc_assert (((lang_tree_node *)t)->template_decl.arguments);
6949 gcc_assert (((lang_tree_node *)t)->template_decl.result);
6950 if (DECL_UNINSTANTIATED_TEMPLATE_FRIEND_P (t))
6951 RT (DECL_CHAIN (t));
6952 break;
6953
6954 case TEMPLATE_INFO:
6955 RT (((lang_tree_node *)t)->template_info.tmpl);
6956 RT (((lang_tree_node *)t)->template_info.args);
6957 RT (((lang_tree_node *)t)->template_info.partial);
6958 if (unsigned len = u ())
6959 {
6960 auto &ac = (((lang_tree_node *)t)
6961 ->template_info.deferred_access_checks);
6962 vec_alloc (ac, len);
6963 for (unsigned ix = 0; ix != len; ix++)
6964 {
6965 deferred_access_check m;
6966
6967 RT (m.binfo);
6968 RT (m.decl);
6969 RT (m.diag_decl);
6970 m.loc = state->read_location (*this);
6971 ac->quick_push (m);
6972 }
6973 }
6974 break;
6975
6976 case TEMPLATE_PARM_INDEX:
6977 RU (((lang_tree_node *)t)->tpi.index);
6978 RU (((lang_tree_node *)t)->tpi.level);
6979 RU (((lang_tree_node *)t)->tpi.orig_level);
6980 RT (((lang_tree_node *)t)->tpi.decl);
6981 break;
6982
6983 case TRAIT_EXPR:
6984 RT (((lang_tree_node *)t)->trait_expression.type1);
6985 RT (((lang_tree_node *)t)->trait_expression.type2);
6986 RUC (cp_trait_kind, ((lang_tree_node *)t)->trait_expression.kind);
6987 break;
6988 }
6989
6990 if (CODE_CONTAINS_STRUCT (code, TS_TYPED))
6991 {
6992 tree type = tree_node ();
6993
6994 if (type && code == ENUMERAL_TYPE && !ENUM_FIXED_UNDERLYING_TYPE_P (t))
6995 {
6996 unsigned precision = u ();
6997
6998 type = build_distinct_type_copy (type);
6999 TYPE_PRECISION (type) = precision;
7000 set_min_and_max_values_for_integral_type (type, precision,
7001 TYPE_SIGN (type));
7002 }
7003
7004 if (code != TEMPLATE_DECL)
7005 t->typed.type = type;
7006 }
7007
7008 if (TREE_CODE (t) == CONSTRUCTOR)
7009 if (unsigned len = u ())
7010 {
7011 vec_alloc (t->constructor.elts, len);
7012 for (unsigned ix = 0; ix != len; ix++)
7013 {
7014 constructor_elt elt;
7015
7016 RT (elt.index);
7017 RTU (elt.value);
7018 t->constructor.elts->quick_push (elt);
7019 }
7020 }
7021
7022 #undef RT
7023 #undef RM
7024 #undef RU
7025 return !get_overrun ();
7026 }
7027
7028 void
7029 trees_out::lang_decl_vals (tree t)
7030 {
7031 const struct lang_decl *lang = DECL_LANG_SPECIFIC (t);
7032 #define WU(X) (u (X))
7033 #define WT(X) (tree_node (X))
7034 /* Module index already written. */
7035 switch (lang->u.base.selector)
7036 {
7037 default:
7038 gcc_unreachable ();
7039
7040 case lds_fn: /* lang_decl_fn. */
7041 if (streaming_p ())
7042 {
7043 if (DECL_NAME (t) && IDENTIFIER_OVL_OP_P (DECL_NAME (t)))
7044 WU (lang->u.fn.ovl_op_code);
7045 }
7046
7047 if (DECL_CLASS_SCOPE_P (t))
7048 WT (lang->u.fn.context);
7049
7050 if (lang->u.fn.thunk_p)
7051 {
7052 /* The thunked-to function. */
7053 WT (lang->u.fn.befriending_classes);
7054 if (streaming_p ())
7055 wi (lang->u.fn.u5.fixed_offset);
7056 }
7057 else if (decl_tls_wrapper_p (t))
7058 /* The wrapped variable. */
7059 WT (lang->u.fn.befriending_classes);
7060 else
7061 WT (lang->u.fn.u5.cloned_function);
7062
7063 if (FNDECL_USED_AUTO (t))
7064 WT (lang->u.fn.u.saved_auto_return_type);
7065
7066 goto lds_min;
7067
7068 case lds_decomp: /* lang_decl_decomp. */
7069 WT (lang->u.decomp.base);
7070 goto lds_min;
7071
7072 case lds_min: /* lang_decl_min. */
7073 lds_min:
7074 WT (lang->u.min.template_info);
7075 {
7076 tree access = lang->u.min.access;
7077
7078 /* DECL_ACCESS needs to be maintained by the definition of the
7079 (derived) class that changes the access. The other users
7080 of DECL_ACCESS need to write it here. */
7081 if (!DECL_THUNK_P (t)
7082 && (DECL_CONTEXT (t) && TYPE_P (DECL_CONTEXT (t))))
7083 access = NULL_TREE;
7084
7085 WT (access);
7086 }
7087 break;
7088
7089 case lds_ns: /* lang_decl_ns. */
7090 break;
7091
7092 case lds_parm: /* lang_decl_parm. */
7093 if (streaming_p ())
7094 {
7095 WU (lang->u.parm.level);
7096 WU (lang->u.parm.index);
7097 }
7098 break;
7099 }
7100 #undef WU
7101 #undef WT
7102 }
7103
7104 bool
7105 trees_in::lang_decl_vals (tree t)
7106 {
7107 struct lang_decl *lang = DECL_LANG_SPECIFIC (t);
7108 #define RU(X) ((X) = u ())
7109 #define RT(X) ((X) = tree_node ())
7110
7111 /* Module index already read. */
7112 switch (lang->u.base.selector)
7113 {
7114 default:
7115 gcc_unreachable ();
7116
7117 case lds_fn: /* lang_decl_fn. */
7118 if (DECL_NAME (t) && IDENTIFIER_OVL_OP_P (DECL_NAME (t)))
7119 {
7120 unsigned code = u ();
7121
7122 /* Check consistency. */
7123 if (code >= OVL_OP_MAX
7124 || (ovl_op_info[IDENTIFIER_ASSIGN_OP_P (DECL_NAME (t))][code]
7125 .ovl_op_code) == OVL_OP_ERROR_MARK)
7126 set_overrun ();
7127 else
7128 lang->u.fn.ovl_op_code = code;
7129 }
7130
7131 if (DECL_CLASS_SCOPE_P (t))
7132 RT (lang->u.fn.context);
7133
7134 if (lang->u.fn.thunk_p)
7135 {
7136 RT (lang->u.fn.befriending_classes);
7137 lang->u.fn.u5.fixed_offset = wi ();
7138 }
7139 else if (decl_tls_wrapper_p (t))
7140 RT (lang->u.fn.befriending_classes);
7141 else
7142 RT (lang->u.fn.u5.cloned_function);
7143
7144 if (FNDECL_USED_AUTO (t))
7145 RT (lang->u.fn.u.saved_auto_return_type);
7146 goto lds_min;
7147
7148 case lds_decomp: /* lang_decl_decomp. */
7149 RT (lang->u.decomp.base);
7150 goto lds_min;
7151
7152 case lds_min: /* lang_decl_min. */
7153 lds_min:
7154 RT (lang->u.min.template_info);
7155 RT (lang->u.min.access);
7156 break;
7157
7158 case lds_ns: /* lang_decl_ns. */
7159 break;
7160
7161 case lds_parm: /* lang_decl_parm. */
7162 RU (lang->u.parm.level);
7163 RU (lang->u.parm.index);
7164 break;
7165 }
7166 #undef RU
7167 #undef RT
7168 return !get_overrun ();
7169 }
7170
7171 /* Most of the value contents of lang_type is streamed in
7172 define_class. */
7173
7174 void
7175 trees_out::lang_type_vals (tree t)
7176 {
7177 const struct lang_type *lang = TYPE_LANG_SPECIFIC (t);
7178 #define WU(X) (u (X))
7179 #define WT(X) (tree_node (X))
7180 if (streaming_p ())
7181 WU (lang->align);
7182 #undef WU
7183 #undef WT
7184 }
7185
7186 bool
7187 trees_in::lang_type_vals (tree t)
7188 {
7189 struct lang_type *lang = TYPE_LANG_SPECIFIC (t);
7190 #define RU(X) ((X) = u ())
7191 #define RT(X) ((X) = tree_node ())
7192 RU (lang->align);
7193 #undef RU
7194 #undef RT
7195 return !get_overrun ();
7196 }
7197
7198 /* Write out the bools of T, including information about any
7199 LANG_SPECIFIC information. Including allocation of any lang
7200 specific object. */
7201
7202 void
7203 trees_out::tree_node_bools (tree t)
7204 {
7205 gcc_checking_assert (streaming_p ());
7206
7207 /* We should never stream a namespace. */
7208 gcc_checking_assert (TREE_CODE (t) != NAMESPACE_DECL
7209 || DECL_NAMESPACE_ALIAS (t));
7210
7211 bits_out bits = stream_bits ();
7212 core_bools (t, bits);
7213
7214 switch (TREE_CODE_CLASS (TREE_CODE (t)))
7215 {
7216 case tcc_declaration:
7217 {
7218 bool specific = DECL_LANG_SPECIFIC (t) != NULL;
7219 bits.b (specific);
7220 if (specific && VAR_P (t))
7221 bits.b (DECL_DECOMPOSITION_P (t));
7222 if (specific)
7223 lang_decl_bools (t, bits);
7224 }
7225 break;
7226
7227 case tcc_type:
7228 {
7229 bool specific = (TYPE_MAIN_VARIANT (t) == t
7230 && TYPE_LANG_SPECIFIC (t) != NULL);
7231 gcc_assert (TYPE_LANG_SPECIFIC (t)
7232 == TYPE_LANG_SPECIFIC (TYPE_MAIN_VARIANT (t)));
7233
7234 bits.b (specific);
7235 if (specific)
7236 lang_type_bools (t, bits);
7237 }
7238 break;
7239
7240 default:
7241 break;
7242 }
7243
7244 bits.bflush ();
7245 }
7246
7247 bool
7248 trees_in::tree_node_bools (tree t)
7249 {
7250 bits_in bits = stream_bits ();
7251 bool ok = core_bools (t, bits);
7252
7253 if (ok)
7254 switch (TREE_CODE_CLASS (TREE_CODE (t)))
7255 {
7256 case tcc_declaration:
7257 if (bits.b ())
7258 {
7259 bool decomp = VAR_P (t) && bits.b ();
7260
7261 ok = maybe_add_lang_decl_raw (t, decomp);
7262 if (ok)
7263 ok = lang_decl_bools (t, bits);
7264 }
7265 break;
7266
7267 case tcc_type:
7268 if (bits.b ())
7269 {
7270 ok = maybe_add_lang_type_raw (t);
7271 if (ok)
7272 ok = lang_type_bools (t, bits);
7273 }
7274 break;
7275
7276 default:
7277 break;
7278 }
7279
7280 bits.bflush ();
7281 if (!ok || get_overrun ())
7282 return false;
7283
7284 return true;
7285 }
7286
7287
7288 /* Write out the lang-specifc vals of node T. */
7289
7290 void
7291 trees_out::lang_vals (tree t)
7292 {
7293 switch (TREE_CODE_CLASS (TREE_CODE (t)))
7294 {
7295 case tcc_declaration:
7296 if (DECL_LANG_SPECIFIC (t))
7297 lang_decl_vals (t);
7298 break;
7299
7300 case tcc_type:
7301 if (TYPE_MAIN_VARIANT (t) == t && TYPE_LANG_SPECIFIC (t))
7302 lang_type_vals (t);
7303 break;
7304
7305 default:
7306 break;
7307 }
7308 }
7309
7310 bool
7311 trees_in::lang_vals (tree t)
7312 {
7313 bool ok = true;
7314
7315 switch (TREE_CODE_CLASS (TREE_CODE (t)))
7316 {
7317 case tcc_declaration:
7318 if (DECL_LANG_SPECIFIC (t))
7319 ok = lang_decl_vals (t);
7320 break;
7321
7322 case tcc_type:
7323 if (TYPE_LANG_SPECIFIC (t))
7324 ok = lang_type_vals (t);
7325 else
7326 TYPE_LANG_SPECIFIC (t) = TYPE_LANG_SPECIFIC (TYPE_MAIN_VARIANT (t));
7327 break;
7328
7329 default:
7330 break;
7331 }
7332
7333 return ok;
7334 }
7335
7336 /* Write out the value fields of node T. */
7337
7338 void
7339 trees_out::tree_node_vals (tree t)
7340 {
7341 core_vals (t);
7342 lang_vals (t);
7343 }
7344
7345 bool
7346 trees_in::tree_node_vals (tree t)
7347 {
7348 bool ok = core_vals (t);
7349 if (ok)
7350 ok = lang_vals (t);
7351
7352 return ok;
7353 }
7354
7355
7356 /* If T is a back reference, fixed reference or NULL, write out its
7357 code and return WK_none. Otherwise return WK_value if we must write
7358 by value, or WK_normal otherwise. */
7359
7360 walk_kind
7361 trees_out::ref_node (tree t)
7362 {
7363 if (!t)
7364 {
7365 if (streaming_p ())
7366 {
7367 /* NULL_TREE -> tt_null. */
7368 null_count++;
7369 i (tt_null);
7370 }
7371 return WK_none;
7372 }
7373
7374 if (!TREE_VISITED (t))
7375 return WK_normal;
7376
7377 /* An already-visited tree. It must be in the map. */
7378 int val = get_tag (t);
7379
7380 if (val == tag_value)
7381 /* An entry we should walk into. */
7382 return WK_value;
7383
7384 const char *kind;
7385
7386 if (val <= tag_backref)
7387 {
7388 /* Back reference -> -ve number */
7389 if (streaming_p ())
7390 i (val);
7391 kind = "backref";
7392 }
7393 else if (val >= tag_fixed)
7394 {
7395 /* Fixed reference -> tt_fixed */
7396 val -= tag_fixed;
7397 if (streaming_p ())
7398 i (tt_fixed), u (val);
7399 kind = "fixed";
7400 }
7401
7402 if (streaming_p ())
7403 {
7404 back_ref_count++;
7405 dump (dumper::TREE)
7406 && dump ("Wrote %s:%d %C:%N%S", kind, val, TREE_CODE (t), t, t);
7407 }
7408 return WK_none;
7409 }
7410
7411 tree
7412 trees_in::back_ref (int tag)
7413 {
7414 tree res = NULL_TREE;
7415
7416 if (tag < 0 && unsigned (~tag) < back_refs.length ())
7417 res = back_refs[~tag];
7418
7419 if (!res
7420 /* Checking TREE_CODE is a dereference, so we know this is not a
7421 wild pointer. Checking the code provides evidence we've not
7422 corrupted something. */
7423 || TREE_CODE (res) >= MAX_TREE_CODES)
7424 set_overrun ();
7425 else
7426 dump (dumper::TREE) && dump ("Read backref:%d found %C:%N%S", tag,
7427 TREE_CODE (res), res, res);
7428 return res;
7429 }
7430
7431 unsigned
7432 trees_out::add_indirect_tpl_parms (tree parms)
7433 {
7434 unsigned len = 0;
7435 for (; parms; parms = TREE_CHAIN (parms), len++)
7436 {
7437 if (TREE_VISITED (parms))
7438 break;
7439
7440 int tag = insert (parms);
7441 if (streaming_p ())
7442 dump (dumper::TREE)
7443 && dump ("Indirect:%d template's parameter %u %C:%N",
7444 tag, len, TREE_CODE (parms), parms);
7445 }
7446
7447 if (streaming_p ())
7448 u (len);
7449
7450 return len;
7451 }
7452
7453 unsigned
7454 trees_in::add_indirect_tpl_parms (tree parms)
7455 {
7456 unsigned len = u ();
7457 for (unsigned ix = 0; ix != len; parms = TREE_CHAIN (parms), ix++)
7458 {
7459 int tag = insert (parms);
7460 dump (dumper::TREE)
7461 && dump ("Indirect:%d template's parameter %u %C:%N",
7462 tag, ix, TREE_CODE (parms), parms);
7463 }
7464
7465 return len;
7466 }
7467
7468 /* We've just found DECL by name. Insert nodes that come with it, but
7469 cannot be found by name, so we'll not accidentally walk into them. */
7470
7471 void
7472 trees_out::add_indirects (tree decl)
7473 {
7474 unsigned count = 0;
7475
7476 // FIXME:OPTIMIZATION We'll eventually want default fn parms of
7477 // templates and perhaps default template parms too. The former can
7478 // be referenced from instantiations (as they are lazily
7479 // instantiated). Also (deferred?) exception specifications of
7480 // templates. See the note about PARM_DECLs in trees_out::decl_node.
7481 tree inner = decl;
7482 if (TREE_CODE (decl) == TEMPLATE_DECL)
7483 {
7484 count += add_indirect_tpl_parms (DECL_TEMPLATE_PARMS (decl));
7485
7486 inner = DECL_TEMPLATE_RESULT (decl);
7487 int tag = insert (inner);
7488 if (streaming_p ())
7489 dump (dumper::TREE)
7490 && dump ("Indirect:%d template's result %C:%N",
7491 tag, TREE_CODE (inner), inner);
7492 count++;
7493 }
7494
7495 if (TREE_CODE (inner) == TYPE_DECL)
7496 {
7497 /* Make sure the type is in the map too. Otherwise we get
7498 different RECORD_TYPEs for the same type, and things go
7499 south. */
7500 tree type = TREE_TYPE (inner);
7501 gcc_checking_assert (DECL_ORIGINAL_TYPE (inner)
7502 || TYPE_NAME (type) == inner);
7503 int tag = insert (type);
7504 if (streaming_p ())
7505 dump (dumper::TREE) && dump ("Indirect:%d decl's type %C:%N", tag,
7506 TREE_CODE (type), type);
7507 count++;
7508 }
7509
7510 if (streaming_p ())
7511 {
7512 u (count);
7513 dump (dumper::TREE) && dump ("Inserted %u indirects", count);
7514 }
7515 }
7516
7517 bool
7518 trees_in::add_indirects (tree decl)
7519 {
7520 unsigned count = 0;
7521
7522 tree inner = decl;
7523 if (TREE_CODE (inner) == TEMPLATE_DECL)
7524 {
7525 count += add_indirect_tpl_parms (DECL_TEMPLATE_PARMS (decl));
7526
7527 inner = DECL_TEMPLATE_RESULT (decl);
7528 int tag = insert (inner);
7529 dump (dumper::TREE)
7530 && dump ("Indirect:%d templates's result %C:%N", tag,
7531 TREE_CODE (inner), inner);
7532 count++;
7533 }
7534
7535 if (TREE_CODE (inner) == TYPE_DECL)
7536 {
7537 tree type = TREE_TYPE (inner);
7538 gcc_checking_assert (DECL_ORIGINAL_TYPE (inner)
7539 || TYPE_NAME (type) == inner);
7540 int tag = insert (type);
7541 dump (dumper::TREE)
7542 && dump ("Indirect:%d decl's type %C:%N", tag, TREE_CODE (type), type);
7543 count++;
7544 }
7545
7546 dump (dumper::TREE) && dump ("Inserted %u indirects", count);
7547 return count == u ();
7548 }
7549
7550 /* Stream a template parameter. There are 4.5 kinds of parameter:
7551 a) Template - TEMPLATE_DECL->TYPE_DECL->TEMPLATE_TEMPLATE_PARM
7552 TEMPLATE_TYPE_PARM_INDEX TPI
7553 b) Type - TYPE_DECL->TEMPLATE_TYPE_PARM TEMPLATE_TYPE_PARM_INDEX TPI
7554 c.1) NonTYPE - PARM_DECL DECL_INITIAL TPI We meet this first
7555 c.2) NonTYPE - CONST_DECL DECL_INITIAL Same TPI
7556 d) BoundTemplate - TYPE_DECL->BOUND_TEMPLATE_TEMPLATE_PARM
7557 TEMPLATE_TYPE_PARM_INDEX->TPI
7558 TEMPLATE_TEMPLATE_PARM_INFO->TEMPLATE_INFO
7559
7560 All of these point to a TEMPLATE_PARM_INDEX, and #B also has a TEMPLATE_INFO
7561 */
7562
7563 void
7564 trees_out::tpl_parm_value (tree parm)
7565 {
7566 gcc_checking_assert (DECL_P (parm) && DECL_TEMPLATE_PARM_P (parm));
7567
7568 int parm_tag = insert (parm);
7569 if (streaming_p ())
7570 {
7571 i (tt_tpl_parm);
7572 dump (dumper::TREE) && dump ("Writing template parm:%d %C:%N",
7573 parm_tag, TREE_CODE (parm), parm);
7574 start (parm);
7575 tree_node_bools (parm);
7576 }
7577
7578 tree inner = parm;
7579 if (TREE_CODE (inner) == TEMPLATE_DECL)
7580 {
7581 inner = DECL_TEMPLATE_RESULT (inner);
7582 int inner_tag = insert (inner);
7583 if (streaming_p ())
7584 {
7585 dump (dumper::TREE) && dump ("Writing inner template parm:%d %C:%N",
7586 inner_tag, TREE_CODE (inner), inner);
7587 start (inner);
7588 tree_node_bools (inner);
7589 }
7590 }
7591
7592 tree type = NULL_TREE;
7593 if (TREE_CODE (inner) == TYPE_DECL)
7594 {
7595 type = TREE_TYPE (inner);
7596 int type_tag = insert (type);
7597 if (streaming_p ())
7598 {
7599 dump (dumper::TREE) && dump ("Writing template parm type:%d %C:%N",
7600 type_tag, TREE_CODE (type), type);
7601 start (type);
7602 tree_node_bools (type);
7603 }
7604 }
7605
7606 if (inner != parm)
7607 {
7608 /* This is a template-template parameter. */
7609 unsigned tpl_levels = 0;
7610 tpl_header (parm, &tpl_levels);
7611 tpl_parms_fini (parm, tpl_levels);
7612 }
7613
7614 tree_node_vals (parm);
7615 if (inner != parm)
7616 tree_node_vals (inner);
7617 if (type)
7618 {
7619 tree_node_vals (type);
7620 if (DECL_NAME (inner) == auto_identifier
7621 || DECL_NAME (inner) == decltype_auto_identifier)
7622 {
7623 /* Placeholder auto. */
7624 tree_node (DECL_INITIAL (inner));
7625 tree_node (DECL_SIZE_UNIT (inner));
7626 }
7627 }
7628
7629 if (streaming_p ())
7630 dump (dumper::TREE) && dump ("Wrote template parm:%d %C:%N",
7631 parm_tag, TREE_CODE (parm), parm);
7632 }
7633
7634 tree
7635 trees_in::tpl_parm_value ()
7636 {
7637 tree parm = start ();
7638 if (!parm || !tree_node_bools (parm))
7639 return NULL_TREE;
7640
7641 int parm_tag = insert (parm);
7642 dump (dumper::TREE) && dump ("Reading template parm:%d %C:%N",
7643 parm_tag, TREE_CODE (parm), parm);
7644
7645 tree inner = parm;
7646 if (TREE_CODE (inner) == TEMPLATE_DECL)
7647 {
7648 inner = start ();
7649 if (!inner || !tree_node_bools (inner))
7650 return NULL_TREE;
7651 int inner_tag = insert (inner);
7652 dump (dumper::TREE) && dump ("Reading inner template parm:%d %C:%N",
7653 inner_tag, TREE_CODE (inner), inner);
7654 DECL_TEMPLATE_RESULT (parm) = inner;
7655 }
7656
7657 tree type = NULL_TREE;
7658 if (TREE_CODE (inner) == TYPE_DECL)
7659 {
7660 type = start ();
7661 if (!type || !tree_node_bools (type))
7662 return NULL_TREE;
7663 int type_tag = insert (type);
7664 dump (dumper::TREE) && dump ("Reading template parm type:%d %C:%N",
7665 type_tag, TREE_CODE (type), type);
7666
7667 TREE_TYPE (inner) = TREE_TYPE (parm) = type;
7668 TYPE_NAME (type) = parm;
7669 }
7670
7671 if (inner != parm)
7672 {
7673 /* A template template parameter. */
7674 unsigned tpl_levels = 0;
7675 tpl_header (parm, &tpl_levels);
7676 tpl_parms_fini (parm, tpl_levels);
7677 }
7678
7679 tree_node_vals (parm);
7680 if (inner != parm)
7681 tree_node_vals (inner);
7682 if (type)
7683 {
7684 tree_node_vals (type);
7685 if (DECL_NAME (inner) == auto_identifier
7686 || DECL_NAME (inner) == decltype_auto_identifier)
7687 {
7688 /* Placeholder auto. */
7689 DECL_INITIAL (inner) = tree_node ();
7690 DECL_SIZE_UNIT (inner) = tree_node ();
7691 }
7692 if (TYPE_CANONICAL (type))
7693 {
7694 gcc_checking_assert (TYPE_CANONICAL (type) == type);
7695 TYPE_CANONICAL (type) = canonical_type_parameter (type);
7696 }
7697 }
7698
7699 dump (dumper::TREE) && dump ("Read template parm:%d %C:%N",
7700 parm_tag, TREE_CODE (parm), parm);
7701
7702 return parm;
7703 }
7704
7705 void
7706 trees_out::install_entity (tree decl, depset *dep)
7707 {
7708 gcc_checking_assert (streaming_p ());
7709
7710 /* Write the entity index, so we can insert it as soon as we
7711 know this is new. */
7712 u (dep ? dep->cluster + 1 : 0);
7713 if (CHECKING_P && dep)
7714 {
7715 /* Add it to the entity map, such that we can tell it is
7716 part of us. */
7717 bool existed;
7718 unsigned *slot = &entity_map->get_or_insert
7719 (DECL_UID (decl), &existed);
7720 if (existed)
7721 /* If it existed, it should match. */
7722 gcc_checking_assert (decl == (*entity_ary)[*slot]);
7723 *slot = ~dep->cluster;
7724 }
7725 }
7726
7727 bool
7728 trees_in::install_entity (tree decl)
7729 {
7730 unsigned entity_index = u ();
7731 if (!entity_index)
7732 return false;
7733
7734 if (entity_index > state->entity_num)
7735 {
7736 set_overrun ();
7737 return false;
7738 }
7739
7740 /* Insert the real decl into the entity ary. */
7741 unsigned ident = state->entity_lwm + entity_index - 1;
7742 (*entity_ary)[ident] = decl;
7743
7744 /* And into the entity map, if it's not already there. */
7745 tree not_tmpl = STRIP_TEMPLATE (decl);
7746 if (!DECL_LANG_SPECIFIC (not_tmpl)
7747 || !DECL_MODULE_ENTITY_P (not_tmpl))
7748 {
7749 retrofit_lang_decl (not_tmpl);
7750 DECL_MODULE_ENTITY_P (not_tmpl) = true;
7751
7752 /* Insert into the entity hash (it cannot already be there). */
7753 bool existed;
7754 unsigned &slot = entity_map->get_or_insert (DECL_UID (decl), &existed);
7755 gcc_checking_assert (!existed);
7756 slot = ident;
7757 }
7758 else if (state->is_partition ())
7759 {
7760 /* The decl is already in the entity map, but we see it again now from a
7761 partition: we want to overwrite if the original decl wasn't also from
7762 a (possibly different) partition. Otherwise, for things like template
7763 instantiations, make_dependency might not realise that this is also
7764 provided from a partition and should be considered part of this module
7765 (and thus always emitted into the primary interface's CMI). */
7766 unsigned *slot = entity_map->get (DECL_UID (decl));
7767 module_state *imp = import_entity_module (*slot);
7768 if (!imp->is_partition ())
7769 *slot = ident;
7770 }
7771
7772 return true;
7773 }
7774
7775 static bool has_definition (tree decl);
7776
7777 /* DECL is a decl node that must be written by value. DEP is the
7778 decl's depset. */
7779
7780 void
7781 trees_out::decl_value (tree decl, depset *dep)
7782 {
7783 /* We should not be writing clones or template parms. */
7784 gcc_checking_assert (DECL_P (decl)
7785 && !DECL_CLONED_FUNCTION_P (decl)
7786 && !DECL_TEMPLATE_PARM_P (decl));
7787
7788 /* We should never be writing non-typedef ptrmemfuncs by value. */
7789 gcc_checking_assert (TREE_CODE (decl) != TYPE_DECL
7790 || DECL_ORIGINAL_TYPE (decl)
7791 || !TYPE_PTRMEMFUNC_P (TREE_TYPE (decl)));
7792
7793 merge_kind mk = get_merge_kind (decl, dep);
7794
7795 if (CHECKING_P)
7796 {
7797 /* Never start in the middle of a template. */
7798 int use_tpl = -1;
7799 if (tree ti = node_template_info (decl, use_tpl))
7800 gcc_checking_assert (TREE_CODE (TI_TEMPLATE (ti)) == OVERLOAD
7801 || TREE_CODE (TI_TEMPLATE (ti)) == FIELD_DECL
7802 || (DECL_TEMPLATE_RESULT (TI_TEMPLATE (ti))
7803 != decl));
7804 }
7805
7806 if (streaming_p ())
7807 {
7808 /* A new node -> tt_decl. */
7809 decl_val_count++;
7810 i (tt_decl);
7811 u (mk);
7812 start (decl);
7813
7814 if (mk != MK_unique)
7815 {
7816 bits_out bits = stream_bits ();
7817 if (!(mk & MK_template_mask) && !state->is_header ())
7818 {
7819 /* Tell the importer whether this is a global module entity,
7820 or a module entity. */
7821 tree o = get_originating_module_decl (decl);
7822 bool is_attached = false;
7823
7824 tree not_tmpl = STRIP_TEMPLATE (o);
7825 if (DECL_LANG_SPECIFIC (not_tmpl)
7826 && DECL_MODULE_ATTACH_P (not_tmpl))
7827 is_attached = true;
7828
7829 /* But don't consider imported temploid friends as attached,
7830 since importers will need to merge this decl even if it was
7831 attached to a different module. */
7832 if (imported_temploid_friends->get (decl))
7833 is_attached = false;
7834
7835 bits.b (is_attached);
7836 }
7837 bits.b (dep && dep->has_defn ());
7838 }
7839 tree_node_bools (decl);
7840 }
7841
7842 int tag = insert (decl, WK_value);
7843 if (streaming_p ())
7844 dump (dumper::TREE)
7845 && dump ("Writing %s:%d %C:%N%S", merge_kind_name[mk], tag,
7846 TREE_CODE (decl), decl, decl);
7847
7848 tree inner = decl;
7849 int inner_tag = 0;
7850 if (TREE_CODE (decl) == TEMPLATE_DECL)
7851 {
7852 inner = DECL_TEMPLATE_RESULT (decl);
7853 inner_tag = insert (inner, WK_value);
7854
7855 if (streaming_p ())
7856 {
7857 int code = TREE_CODE (inner);
7858 u (code);
7859 start (inner, true);
7860 tree_node_bools (inner);
7861 dump (dumper::TREE)
7862 && dump ("Writing %s:%d %C:%N%S", merge_kind_name[mk], inner_tag,
7863 TREE_CODE (inner), inner, inner);
7864 }
7865 }
7866
7867 tree type = NULL_TREE;
7868 int type_tag = 0;
7869 tree stub_decl = NULL_TREE;
7870 int stub_tag = 0;
7871 if (TREE_CODE (inner) == TYPE_DECL)
7872 {
7873 type = TREE_TYPE (inner);
7874 bool has_type = (type == TYPE_MAIN_VARIANT (type)
7875 && TYPE_NAME (type) == inner);
7876
7877 if (streaming_p ())
7878 u (has_type ? TREE_CODE (type) : 0);
7879
7880 if (has_type)
7881 {
7882 type_tag = insert (type, WK_value);
7883 if (streaming_p ())
7884 {
7885 start (type, true);
7886 tree_node_bools (type);
7887 dump (dumper::TREE)
7888 && dump ("Writing type:%d %C:%N", type_tag,
7889 TREE_CODE (type), type);
7890 }
7891
7892 stub_decl = TYPE_STUB_DECL (type);
7893 bool has_stub = inner != stub_decl;
7894 if (streaming_p ())
7895 u (has_stub ? TREE_CODE (stub_decl) : 0);
7896 if (has_stub)
7897 {
7898 stub_tag = insert (stub_decl);
7899 if (streaming_p ())
7900 {
7901 start (stub_decl, true);
7902 tree_node_bools (stub_decl);
7903 dump (dumper::TREE)
7904 && dump ("Writing stub_decl:%d %C:%N", stub_tag,
7905 TREE_CODE (stub_decl), stub_decl);
7906 }
7907 }
7908 else
7909 stub_decl = NULL_TREE;
7910 }
7911 else
7912 /* Regular typedef. */
7913 type = NULL_TREE;
7914 }
7915
7916 /* Stream the container, we want it correctly canonicalized before
7917 we start emitting keys for this decl. */
7918 tree container = decl_container (decl);
7919
7920 unsigned tpl_levels = 0;
7921 if (decl != inner)
7922 tpl_header (decl, &tpl_levels);
7923 if (TREE_CODE (inner) == FUNCTION_DECL)
7924 fn_parms_init (inner);
7925
7926 /* Now write out the merging information, and then really
7927 install the tag values. */
7928 key_mergeable (tag, mk, decl, inner, container, dep);
7929
7930 if (streaming_p ())
7931 dump (dumper::MERGE)
7932 && dump ("Wrote:%d's %s merge key %C:%N", tag,
7933 merge_kind_name[mk], TREE_CODE (decl), decl);
7934
7935 if (TREE_CODE (inner) == FUNCTION_DECL)
7936 fn_parms_fini (inner);
7937
7938 if (!is_key_order ())
7939 tree_node_vals (decl);
7940
7941 if (inner_tag)
7942 {
7943 if (!is_key_order ())
7944 tree_node_vals (inner);
7945 tpl_parms_fini (decl, tpl_levels);
7946 }
7947
7948 if (type && !is_key_order ())
7949 {
7950 tree_node_vals (type);
7951 if (stub_decl)
7952 tree_node_vals (stub_decl);
7953 }
7954
7955 if (!is_key_order ())
7956 {
7957 if (mk & MK_template_mask
7958 || mk == MK_partial
7959 || mk == MK_friend_spec)
7960 {
7961 if (mk != MK_partial)
7962 {
7963 // FIXME: We should make use of the merge-key by
7964 // exposing it outside of key_mergeable. But this gets
7965 // the job done.
7966 auto *entry = reinterpret_cast <spec_entry *> (dep->deps[0]);
7967
7968 if (streaming_p ())
7969 u (get_mergeable_specialization_flags (entry->tmpl, decl));
7970 tree_node (entry->tmpl);
7971 tree_node (entry->args);
7972 }
7973 else
7974 {
7975 tree ti = get_template_info (inner);
7976 tree_node (TI_TEMPLATE (ti));
7977 tree_node (TI_ARGS (ti));
7978 }
7979 }
7980 tree_node (get_constraints (decl));
7981 }
7982
7983 if (streaming_p ())
7984 {
7985 /* Do not stray outside this section. */
7986 gcc_checking_assert (!dep || dep->section == dep_hash->section);
7987
7988 /* Write the entity index, so we can insert it as soon as we
7989 know this is new. */
7990 install_entity (decl, dep);
7991 }
7992
7993 if (DECL_LANG_SPECIFIC (inner)
7994 && DECL_MODULE_KEYED_DECLS_P (inner)
7995 && !is_key_order ())
7996 {
7997 /* Stream the keyed entities. */
7998 auto *attach_vec = keyed_table->get (inner);
7999 unsigned num = attach_vec->length ();
8000 if (streaming_p ())
8001 u (num);
8002 for (unsigned ix = 0; ix != num; ix++)
8003 {
8004 tree attached = (*attach_vec)[ix];
8005 tree_node (attached);
8006 if (streaming_p ())
8007 dump (dumper::MERGE)
8008 && dump ("Written %d[%u] attached decl %N", tag, ix, attached);
8009 }
8010 }
8011
8012 if (TREE_CODE (inner) == FUNCTION_DECL
8013 || TREE_CODE (inner) == TYPE_DECL)
8014 {
8015 /* Write imported temploid friends so that importers can reconstruct
8016 this information on stream-in. */
8017 tree* slot = imported_temploid_friends->get (decl);
8018 tree_node (slot ? *slot : NULL_TREE);
8019 }
8020
8021 bool is_typedef = false;
8022 if (!type && TREE_CODE (inner) == TYPE_DECL)
8023 {
8024 tree t = TREE_TYPE (inner);
8025 unsigned tdef_flags = 0;
8026 if (DECL_ORIGINAL_TYPE (inner)
8027 && TYPE_NAME (TREE_TYPE (inner)) == inner)
8028 {
8029 tdef_flags |= 1;
8030 if (TYPE_STRUCTURAL_EQUALITY_P (t)
8031 && TYPE_DEPENDENT_P_VALID (t)
8032 && TYPE_DEPENDENT_P (t))
8033 tdef_flags |= 2;
8034 }
8035 if (streaming_p ())
8036 u (tdef_flags);
8037
8038 if (tdef_flags & 1)
8039 {
8040 /* A typedef type. */
8041 int type_tag = insert (t);
8042 if (streaming_p ())
8043 dump (dumper::TREE)
8044 && dump ("Cloned:%d %s %C:%N", type_tag,
8045 tdef_flags & 2 ? "depalias" : "typedef",
8046 TREE_CODE (t), t);
8047
8048 is_typedef = true;
8049 }
8050 }
8051
8052 if (streaming_p () && DECL_MAYBE_IN_CHARGE_CDTOR_P (decl))
8053 {
8054 bool cloned_p
8055 = (DECL_CHAIN (decl) && DECL_CLONED_FUNCTION_P (DECL_CHAIN (decl)));
8056 bool needs_vtt_parm_p
8057 = (cloned_p && CLASSTYPE_VBASECLASSES (DECL_CONTEXT (decl)));
8058 bool omit_inherited_parms_p
8059 = (cloned_p && DECL_MAYBE_IN_CHARGE_CONSTRUCTOR_P (decl)
8060 && base_ctor_omit_inherited_parms (decl));
8061 unsigned flags = (int (cloned_p) << 0
8062 | int (needs_vtt_parm_p) << 1
8063 | int (omit_inherited_parms_p) << 2);
8064 u (flags);
8065 dump (dumper::TREE) && dump ("CDTOR %N is %scloned",
8066 decl, cloned_p ? "" : "not ");
8067 }
8068
8069 if (streaming_p () && VAR_P (decl) && CP_DECL_THREAD_LOCAL_P (decl))
8070 u (decl_tls_model (decl));
8071
8072 if (streaming_p ())
8073 dump (dumper::TREE) && dump ("Written decl:%d %C:%N", tag,
8074 TREE_CODE (decl), decl);
8075
8076 if (NAMESPACE_SCOPE_P (inner))
8077 gcc_checking_assert (!dep == (VAR_OR_FUNCTION_DECL_P (inner)
8078 && DECL_LOCAL_DECL_P (inner)));
8079 else if ((TREE_CODE (inner) == TYPE_DECL
8080 && !is_typedef
8081 && TYPE_NAME (TREE_TYPE (inner)) == inner)
8082 || TREE_CODE (inner) == FUNCTION_DECL)
8083 {
8084 bool write_defn = !dep && has_definition (decl);
8085 if (streaming_p ())
8086 u (write_defn);
8087 if (write_defn)
8088 write_definition (decl);
8089 }
8090 }
8091
8092 tree
8093 trees_in::decl_value ()
8094 {
8095 int tag = 0;
8096 bool is_attached = false;
8097 bool has_defn = false;
8098 unsigned mk_u = u ();
8099 if (mk_u >= MK_hwm || !merge_kind_name[mk_u])
8100 {
8101 set_overrun ();
8102 return NULL_TREE;
8103 }
8104
8105 unsigned saved_unused = unused;
8106 unused = 0;
8107
8108 merge_kind mk = merge_kind (mk_u);
8109
8110 tree decl = start ();
8111 if (decl)
8112 {
8113 if (mk != MK_unique)
8114 {
8115 bits_in bits = stream_bits ();
8116 if (!(mk & MK_template_mask) && !state->is_header ())
8117 is_attached = bits.b ();
8118
8119 has_defn = bits.b ();
8120 }
8121
8122 if (!tree_node_bools (decl))
8123 decl = NULL_TREE;
8124 }
8125
8126 /* Insert into map. */
8127 tag = insert (decl);
8128 if (decl)
8129 dump (dumper::TREE)
8130 && dump ("Reading:%d %C", tag, TREE_CODE (decl));
8131
8132 tree inner = decl;
8133 int inner_tag = 0;
8134 if (decl && TREE_CODE (decl) == TEMPLATE_DECL)
8135 {
8136 int code = u ();
8137 inner = start (code);
8138 if (inner && tree_node_bools (inner))
8139 DECL_TEMPLATE_RESULT (decl) = inner;
8140 else
8141 decl = NULL_TREE;
8142
8143 inner_tag = insert (inner);
8144 if (decl)
8145 dump (dumper::TREE)
8146 && dump ("Reading:%d %C", inner_tag, TREE_CODE (inner));
8147 }
8148
8149 tree type = NULL_TREE;
8150 int type_tag = 0;
8151 tree stub_decl = NULL_TREE;
8152 int stub_tag = 0;
8153 if (decl && TREE_CODE (inner) == TYPE_DECL)
8154 {
8155 if (unsigned type_code = u ())
8156 {
8157 type = start (type_code);
8158 if (type && tree_node_bools (type))
8159 {
8160 TREE_TYPE (inner) = type;
8161 TYPE_NAME (type) = inner;
8162 }
8163 else
8164 decl = NULL_TREE;
8165
8166 type_tag = insert (type);
8167 if (decl)
8168 dump (dumper::TREE)
8169 && dump ("Reading type:%d %C", type_tag, TREE_CODE (type));
8170
8171 if (unsigned stub_code = u ())
8172 {
8173 stub_decl = start (stub_code);
8174 if (stub_decl && tree_node_bools (stub_decl))
8175 {
8176 TREE_TYPE (stub_decl) = type;
8177 TYPE_STUB_DECL (type) = stub_decl;
8178 }
8179 else
8180 decl = NULL_TREE;
8181
8182 stub_tag = insert (stub_decl);
8183 if (decl)
8184 dump (dumper::TREE)
8185 && dump ("Reading stub_decl:%d %C", stub_tag,
8186 TREE_CODE (stub_decl));
8187 }
8188 }
8189 }
8190
8191 if (!decl)
8192 {
8193 bail:
8194 if (inner_tag != 0)
8195 back_refs[~inner_tag] = NULL_TREE;
8196 if (type_tag != 0)
8197 back_refs[~type_tag] = NULL_TREE;
8198 if (stub_tag != 0)
8199 back_refs[~stub_tag] = NULL_TREE;
8200 if (tag != 0)
8201 back_refs[~tag] = NULL_TREE;
8202 set_overrun ();
8203 /* Bail. */
8204 unused = saved_unused;
8205 return NULL_TREE;
8206 }
8207
8208 /* Read the container, to ensure it's already been streamed in. */
8209 tree container = decl_container ();
8210 unsigned tpl_levels = 0;
8211
8212 /* Figure out if this decl is already known about. */
8213 int parm_tag = 0;
8214
8215 if (decl != inner)
8216 if (!tpl_header (decl, &tpl_levels))
8217 goto bail;
8218 if (TREE_CODE (inner) == FUNCTION_DECL)
8219 parm_tag = fn_parms_init (inner);
8220
8221 tree existing = key_mergeable (tag, mk, decl, inner, type, container,
8222 is_attached);
8223 tree existing_inner = existing;
8224 if (existing)
8225 {
8226 if (existing == error_mark_node)
8227 goto bail;
8228
8229 if (TREE_CODE (STRIP_TEMPLATE (existing)) == TYPE_DECL)
8230 {
8231 tree etype = TREE_TYPE (existing);
8232 if (TYPE_LANG_SPECIFIC (etype)
8233 && COMPLETE_TYPE_P (etype)
8234 && !CLASSTYPE_MEMBER_VEC (etype))
8235 /* Give it a member vec, we're likely gonna be looking
8236 inside it. */
8237 set_class_bindings (etype, -1);
8238 }
8239
8240 /* Install the existing decl into the back ref array. */
8241 register_duplicate (decl, existing);
8242 back_refs[~tag] = existing;
8243 if (inner_tag != 0)
8244 {
8245 existing_inner = DECL_TEMPLATE_RESULT (existing);
8246 back_refs[~inner_tag] = existing_inner;
8247 }
8248
8249 if (type_tag != 0)
8250 {
8251 tree existing_type = TREE_TYPE (existing);
8252 back_refs[~type_tag] = existing_type;
8253 if (stub_tag != 0)
8254 back_refs[~stub_tag] = TYPE_STUB_DECL (existing_type);
8255 }
8256 }
8257
8258 if (parm_tag)
8259 fn_parms_fini (parm_tag, inner, existing_inner, has_defn);
8260
8261 if (!tree_node_vals (decl))
8262 goto bail;
8263
8264 if (inner_tag)
8265 {
8266 gcc_checking_assert (DECL_TEMPLATE_RESULT (decl) == inner);
8267
8268 if (!tree_node_vals (inner))
8269 goto bail;
8270
8271 if (!tpl_parms_fini (decl, tpl_levels))
8272 goto bail;
8273 }
8274
8275 if (type && (!tree_node_vals (type)
8276 || (stub_decl && !tree_node_vals (stub_decl))))
8277 goto bail;
8278
8279 spec_entry spec;
8280 unsigned spec_flags = 0;
8281 if (mk & MK_template_mask
8282 || mk == MK_partial
8283 || mk == MK_friend_spec)
8284 {
8285 if (mk == MK_partial)
8286 spec_flags = 2;
8287 else
8288 spec_flags = u ();
8289
8290 spec.tmpl = tree_node ();
8291 spec.args = tree_node ();
8292 }
8293 /* Hold constraints on the spec field, for a short while. */
8294 spec.spec = tree_node ();
8295
8296 dump (dumper::TREE) && dump ("Read:%d %C:%N", tag, TREE_CODE (decl), decl);
8297
8298 existing = back_refs[~tag];
8299 bool installed = install_entity (existing);
8300 bool is_new = existing == decl;
8301
8302 if (DECL_LANG_SPECIFIC (inner)
8303 && DECL_MODULE_KEYED_DECLS_P (inner))
8304 {
8305 /* Read and maybe install the attached entities. */
8306 bool existed;
8307 auto &set = keyed_table->get_or_insert (STRIP_TEMPLATE (existing),
8308 &existed);
8309 unsigned num = u ();
8310 if (is_new == existed)
8311 set_overrun ();
8312 if (is_new)
8313 set.reserve (num);
8314 for (unsigned ix = 0; !get_overrun () && ix != num; ix++)
8315 {
8316 tree attached = tree_node ();
8317 dump (dumper::MERGE)
8318 && dump ("Read %d[%u] %s attached decl %N", tag, ix,
8319 is_new ? "new" : "matched", attached);
8320 if (is_new)
8321 set.quick_push (attached);
8322 else if (set[ix] != attached)
8323 set_overrun ();
8324 }
8325 }
8326
8327 if (TREE_CODE (inner) == FUNCTION_DECL
8328 || TREE_CODE (inner) == TYPE_DECL)
8329 if (tree owner = tree_node ())
8330 if (is_new)
8331 imported_temploid_friends->put (decl, owner);
8332
8333 /* Regular typedefs will have a NULL TREE_TYPE at this point. */
8334 unsigned tdef_flags = 0;
8335 bool is_typedef = false;
8336 if (!type && TREE_CODE (inner) == TYPE_DECL)
8337 {
8338 tdef_flags = u ();
8339 if (tdef_flags & 1)
8340 is_typedef = true;
8341 }
8342
8343 if (is_new)
8344 {
8345 /* A newly discovered node. */
8346 if (TREE_CODE (decl) == FUNCTION_DECL && DECL_VIRTUAL_P (decl))
8347 /* Mark this identifier as naming a virtual function --
8348 lookup_overrides relies on this optimization. */
8349 IDENTIFIER_VIRTUAL_P (DECL_NAME (decl)) = true;
8350
8351 if (installed)
8352 {
8353 /* Mark the entity as imported. */
8354 retrofit_lang_decl (inner);
8355 DECL_MODULE_IMPORT_P (inner) = true;
8356 }
8357
8358 if (spec.spec)
8359 set_constraints (decl, spec.spec);
8360
8361 if (TREE_CODE (decl) == INTEGER_CST && !TREE_OVERFLOW (decl))
8362 {
8363 decl = cache_integer_cst (decl, true);
8364 back_refs[~tag] = decl;
8365 }
8366
8367 if (is_typedef)
8368 {
8369 /* Frob it to be ready for cloning. */
8370 TREE_TYPE (inner) = DECL_ORIGINAL_TYPE (inner);
8371 DECL_ORIGINAL_TYPE (inner) = NULL_TREE;
8372 set_underlying_type (inner);
8373 if (tdef_flags & 2)
8374 {
8375 /* Match instantiate_alias_template's handling. */
8376 tree type = TREE_TYPE (inner);
8377 TYPE_DEPENDENT_P (type) = true;
8378 TYPE_DEPENDENT_P_VALID (type) = true;
8379 SET_TYPE_STRUCTURAL_EQUALITY (type);
8380 }
8381 }
8382
8383 if (inner_tag)
8384 /* Set the TEMPLATE_DECL's type. */
8385 TREE_TYPE (decl) = TREE_TYPE (inner);
8386
8387 /* Redetermine whether we need to import or export this declaration
8388 for this TU. But for extern templates we know we must import:
8389 they'll be defined in a different TU.
8390 FIXME: How do dllexport and dllimport interact across a module?
8391 See also https://github.com/itanium-cxx-abi/cxx-abi/issues/170.
8392 May have to revisit? */
8393 if (type
8394 && CLASS_TYPE_P (type)
8395 && TYPE_LANG_SPECIFIC (type)
8396 && !(CLASSTYPE_EXPLICIT_INSTANTIATION (type)
8397 && CLASSTYPE_INTERFACE_KNOWN (type)
8398 && CLASSTYPE_INTERFACE_ONLY (type)))
8399 {
8400 CLASSTYPE_INTERFACE_ONLY (type) = false;
8401 CLASSTYPE_INTERFACE_UNKNOWN (type) = true;
8402 }
8403
8404 /* Add to specialization tables now that constraints etc are
8405 added. */
8406 if (mk == MK_partial)
8407 {
8408 bool is_type = TREE_CODE (inner) == TYPE_DECL;
8409 spec.spec = is_type ? type : inner;
8410 add_mergeable_specialization (!is_type, &spec, decl, spec_flags);
8411 }
8412 else if (mk & MK_template_mask)
8413 {
8414 bool is_type = !(mk & MK_tmpl_decl_mask);
8415 spec.spec = is_type ? type : mk & MK_tmpl_tmpl_mask ? inner : decl;
8416 add_mergeable_specialization (!is_type, &spec, decl, spec_flags);
8417 }
8418
8419 /* When making a CMI from a partition we're going to need to walk partial
8420 specializations again, so make sure they're tracked. */
8421 if (state->is_partition () && (spec_flags & 2))
8422 set_defining_module_for_partial_spec (inner);
8423
8424 if (NAMESPACE_SCOPE_P (decl)
8425 && (mk == MK_named || mk == MK_unique
8426 || mk == MK_enum || mk == MK_friend_spec)
8427 && !(VAR_OR_FUNCTION_DECL_P (decl) && DECL_LOCAL_DECL_P (decl)))
8428 add_module_namespace_decl (CP_DECL_CONTEXT (decl), decl);
8429
8430 if (DECL_ARTIFICIAL (decl)
8431 && TREE_CODE (decl) == FUNCTION_DECL
8432 && !DECL_TEMPLATE_INFO (decl)
8433 && DECL_CONTEXT (decl) && TYPE_P (DECL_CONTEXT (decl))
8434 && TYPE_SIZE (DECL_CONTEXT (decl))
8435 && !DECL_THUNK_P (decl))
8436 /* A new implicit member function, when the class is
8437 complete. This means the importee declared it, and
8438 we must now add it to the class. Note that implicit
8439 member fns of template instantiations do not themselves
8440 look like templates. */
8441 if (!install_implicit_member (inner))
8442 set_overrun ();
8443
8444 /* When importing a TLS wrapper from a header unit, we haven't
8445 actually emitted its definition yet. Remember it so we can
8446 do this later. */
8447 if (state->is_header ()
8448 && decl_tls_wrapper_p (decl))
8449 note_vague_linkage_fn (decl);
8450
8451 /* Setup aliases for the declaration. */
8452 if (tree alias = lookup_attribute ("alias", DECL_ATTRIBUTES (decl)))
8453 {
8454 alias = TREE_VALUE (TREE_VALUE (alias));
8455 alias = get_identifier (TREE_STRING_POINTER (alias));
8456 assemble_alias (decl, alias);
8457 }
8458 }
8459 else
8460 {
8461 /* DECL is the to-be-discarded decl. Its internal pointers will
8462 be to the EXISTING's structure. Frob it to point to its
8463 own other structures, so loading its definition will alter
8464 it, and not the existing decl. */
8465 dump (dumper::MERGE) && dump ("Deduping %N", existing);
8466
8467 if (inner_tag)
8468 DECL_TEMPLATE_RESULT (decl) = inner;
8469
8470 if (type)
8471 {
8472 /* Point at the to-be-discarded type & decl. */
8473 TYPE_NAME (type) = inner;
8474 TREE_TYPE (inner) = type;
8475
8476 TYPE_STUB_DECL (type) = stub_decl ? stub_decl : inner;
8477 if (stub_decl)
8478 TREE_TYPE (stub_decl) = type;
8479 }
8480
8481 if (inner_tag)
8482 /* Set the TEMPLATE_DECL's type. */
8483 TREE_TYPE (decl) = TREE_TYPE (inner);
8484
8485 if (!is_matching_decl (existing, decl, is_typedef))
8486 unmatched_duplicate (existing);
8487
8488 if (TREE_CODE (inner) == FUNCTION_DECL)
8489 {
8490 tree e_inner = STRIP_TEMPLATE (existing);
8491 for (auto parm = DECL_ARGUMENTS (inner);
8492 parm; parm = DECL_CHAIN (parm))
8493 DECL_CONTEXT (parm) = e_inner;
8494 }
8495
8496 /* And our result is the existing node. */
8497 decl = existing;
8498 }
8499
8500 if (mk == MK_friend_spec)
8501 {
8502 tree e = match_mergeable_specialization (true, &spec);
8503 if (!e)
8504 {
8505 spec.spec = inner;
8506 add_mergeable_specialization (true, &spec, decl, spec_flags);
8507 }
8508 else if (e != existing)
8509 set_overrun ();
8510 }
8511
8512 if (is_typedef)
8513 {
8514 /* Insert the type into the array now. */
8515 tag = insert (TREE_TYPE (decl));
8516 dump (dumper::TREE)
8517 && dump ("Cloned:%d typedef %C:%N",
8518 tag, TREE_CODE (TREE_TYPE (decl)), TREE_TYPE (decl));
8519 }
8520
8521 unused = saved_unused;
8522
8523 if (DECL_MAYBE_IN_CHARGE_CDTOR_P (decl))
8524 {
8525 unsigned flags = u ();
8526
8527 if (is_new)
8528 {
8529 bool cloned_p = flags & 1;
8530 dump (dumper::TREE) && dump ("CDTOR %N is %scloned",
8531 decl, cloned_p ? "" : "not ");
8532 if (cloned_p)
8533 build_cdtor_clones (decl, flags & 2, flags & 4,
8534 /* Update the member vec, if there is
8535 one (we're in a different cluster
8536 to the class defn). */
8537 CLASSTYPE_MEMBER_VEC (DECL_CONTEXT (decl)));
8538 }
8539 }
8540
8541 if (VAR_P (decl) && CP_DECL_THREAD_LOCAL_P (decl))
8542 {
8543 enum tls_model model = tls_model (u ());
8544 if (is_new)
8545 set_decl_tls_model (decl, model);
8546 }
8547
8548 if (!NAMESPACE_SCOPE_P (inner)
8549 && ((TREE_CODE (inner) == TYPE_DECL
8550 && !is_typedef
8551 && TYPE_NAME (TREE_TYPE (inner)) == inner)
8552 || TREE_CODE (inner) == FUNCTION_DECL)
8553 && u ())
8554 read_definition (decl);
8555
8556 return decl;
8557 }
8558
8559 /* DECL is an unnameable member of CTX. Return a suitable identifying
8560 index. */
8561
8562 static unsigned
8563 get_field_ident (tree ctx, tree decl)
8564 {
8565 gcc_checking_assert (TREE_CODE (decl) == USING_DECL
8566 || !DECL_NAME (decl)
8567 || IDENTIFIER_ANON_P (DECL_NAME (decl)));
8568
8569 unsigned ix = 0;
8570 for (tree fields = TYPE_FIELDS (ctx);
8571 fields; fields = DECL_CHAIN (fields))
8572 {
8573 if (fields == decl)
8574 return ix;
8575
8576 if (DECL_CONTEXT (fields) == ctx
8577 && (TREE_CODE (fields) == USING_DECL
8578 || (TREE_CODE (fields) == FIELD_DECL
8579 && (!DECL_NAME (fields)
8580 || IDENTIFIER_ANON_P (DECL_NAME (fields))))))
8581 /* Count this field. */
8582 ix++;
8583 }
8584 gcc_unreachable ();
8585 }
8586
8587 static tree
8588 lookup_field_ident (tree ctx, unsigned ix)
8589 {
8590 for (tree fields = TYPE_FIELDS (ctx);
8591 fields; fields = DECL_CHAIN (fields))
8592 if (DECL_CONTEXT (fields) == ctx
8593 && (TREE_CODE (fields) == USING_DECL
8594 || (TREE_CODE (fields) == FIELD_DECL
8595 && (!DECL_NAME (fields)
8596 || IDENTIFIER_ANON_P (DECL_NAME (fields))))))
8597 if (!ix--)
8598 return fields;
8599
8600 return NULL_TREE;
8601 }
8602
8603 /* Reference DECL. REF indicates the walk kind we are performing.
8604 Return true if we should write this decl by value. */
8605
8606 bool
8607 trees_out::decl_node (tree decl, walk_kind ref)
8608 {
8609 gcc_checking_assert (DECL_P (decl) && !DECL_TEMPLATE_PARM_P (decl)
8610 && DECL_CONTEXT (decl));
8611
8612 if (ref == WK_value)
8613 {
8614 depset *dep = dep_hash->find_dependency (decl);
8615 decl_value (decl, dep);
8616 return false;
8617 }
8618
8619 switch (TREE_CODE (decl))
8620 {
8621 default:
8622 break;
8623
8624 case FUNCTION_DECL:
8625 gcc_checking_assert (!DECL_LOCAL_DECL_P (decl));
8626 break;
8627
8628 case RESULT_DECL:
8629 /* Unlike PARM_DECLs, RESULT_DECLs are only generated and
8630 referenced when we're inside the function itself. */
8631 return true;
8632
8633 case PARM_DECL:
8634 {
8635 if (streaming_p ())
8636 i (tt_parm);
8637 tree_node (DECL_CONTEXT (decl));
8638 if (streaming_p ())
8639 {
8640 /* That must have put this in the map. */
8641 walk_kind ref = ref_node (decl);
8642 if (ref != WK_none)
8643 // FIXME:OPTIMIZATION We can wander into bits of the
8644 // template this was instantiated from. For instance
8645 // deferred noexcept and default parms. Currently we'll
8646 // end up cloning those bits of tree. It would be nice
8647 // to reference those specific nodes. I think putting
8648 // those things in the map when we reference their
8649 // template by name. See the note in add_indirects.
8650 return true;
8651
8652 dump (dumper::TREE)
8653 && dump ("Wrote %s reference %N",
8654 TREE_CODE (decl) == PARM_DECL ? "parameter" : "result",
8655 decl);
8656 }
8657 }
8658 return false;
8659
8660 case IMPORTED_DECL:
8661 /* This describes a USING_DECL to the ME's debug machinery. It
8662 originates from the fortran FE, and has nothing to do with
8663 C++ modules. */
8664 return true;
8665
8666 case LABEL_DECL:
8667 return true;
8668
8669 case CONST_DECL:
8670 {
8671 /* If I end up cloning enum decls, implementing C++20 using
8672 E::v, this will need tweaking. */
8673 if (streaming_p ())
8674 i (tt_enum_decl);
8675 tree ctx = DECL_CONTEXT (decl);
8676 gcc_checking_assert (TREE_CODE (ctx) == ENUMERAL_TYPE);
8677 tree_node (ctx);
8678 tree_node (DECL_NAME (decl));
8679
8680 int tag = insert (decl);
8681 if (streaming_p ())
8682 dump (dumper::TREE)
8683 && dump ("Wrote enum decl:%d %C:%N", tag, TREE_CODE (decl), decl);
8684 return false;
8685 }
8686 break;
8687
8688 case USING_DECL:
8689 if (TREE_CODE (DECL_CONTEXT (decl)) == FUNCTION_DECL)
8690 break;
8691 /* FALLTHROUGH */
8692
8693 case FIELD_DECL:
8694 {
8695 if (streaming_p ())
8696 i (tt_data_member);
8697
8698 tree ctx = DECL_CONTEXT (decl);
8699 tree_node (ctx);
8700
8701 tree name = NULL_TREE;
8702
8703 if (TREE_CODE (decl) == USING_DECL)
8704 ;
8705 else
8706 {
8707 name = DECL_NAME (decl);
8708 if (name && IDENTIFIER_ANON_P (name))
8709 name = NULL_TREE;
8710 }
8711
8712 tree_node (name);
8713 if (!name && streaming_p ())
8714 {
8715 unsigned ix = get_field_ident (ctx, decl);
8716 u (ix);
8717 }
8718
8719 int tag = insert (decl);
8720 if (streaming_p ())
8721 dump (dumper::TREE)
8722 && dump ("Wrote member:%d %C:%N", tag, TREE_CODE (decl), decl);
8723 return false;
8724 }
8725 break;
8726
8727 case VAR_DECL:
8728 gcc_checking_assert (!DECL_LOCAL_DECL_P (decl));
8729 if (DECL_VTABLE_OR_VTT_P (decl))
8730 {
8731 /* VTT or VTABLE, they are all on the vtables list. */
8732 tree ctx = CP_DECL_CONTEXT (decl);
8733 tree vtable = CLASSTYPE_VTABLES (ctx);
8734 for (unsigned ix = 0; ; vtable = DECL_CHAIN (vtable), ix++)
8735 if (vtable == decl)
8736 {
8737 gcc_checking_assert (DECL_VIRTUAL_P (decl));
8738 if (streaming_p ())
8739 {
8740 u (tt_vtable);
8741 u (ix);
8742 dump (dumper::TREE)
8743 && dump ("Writing vtable %N[%u]", ctx, ix);
8744 }
8745 tree_node (ctx);
8746 return false;
8747 }
8748 gcc_unreachable ();
8749 }
8750
8751 if (DECL_TINFO_P (decl))
8752 {
8753 tinfo:
8754 /* A typeinfo, tt_tinfo_typedef or tt_tinfo_var. */
8755 bool is_var = VAR_P (decl);
8756 tree type = TREE_TYPE (decl);
8757 unsigned ix = get_pseudo_tinfo_index (type);
8758 if (streaming_p ())
8759 {
8760 i (is_var ? tt_tinfo_var : tt_tinfo_typedef);
8761 u (ix);
8762 }
8763
8764 if (is_var)
8765 {
8766 /* We also need the type it is for and mangled name, so
8767 the reader doesn't need to complete the type (which
8768 would break section ordering). The type it is for is
8769 stashed on the name's TREE_TYPE. */
8770 tree name = DECL_NAME (decl);
8771 tree_node (name);
8772 type = TREE_TYPE (name);
8773 tree_node (type);
8774 }
8775
8776 int tag = insert (decl);
8777 if (streaming_p ())
8778 dump (dumper::TREE)
8779 && dump ("Wrote tinfo_%s:%d %u %N", is_var ? "var" : "type",
8780 tag, ix, type);
8781
8782 if (!is_var)
8783 {
8784 tag = insert (type);
8785 if (streaming_p ())
8786 dump (dumper::TREE)
8787 && dump ("Wrote tinfo_type:%d %u %N", tag, ix, type);
8788 }
8789 return false;
8790 }
8791
8792 if (DECL_NTTP_OBJECT_P (decl))
8793 {
8794 /* A NTTP parm object. */
8795 if (streaming_p ())
8796 i (tt_nttp_var);
8797 tree_node (tparm_object_argument (decl));
8798 tree_node (DECL_NAME (decl));
8799 int tag = insert (decl);
8800 if (streaming_p ())
8801 dump (dumper::TREE)
8802 && dump ("Wrote nttp object:%d %N", tag, DECL_NAME (decl));
8803 return false;
8804 }
8805
8806 break;
8807
8808 case TYPE_DECL:
8809 if (DECL_TINFO_P (decl))
8810 goto tinfo;
8811 break;
8812 }
8813
8814 if (DECL_THUNK_P (decl))
8815 {
8816 /* Thunks are similar to binfos -- write the thunked-to decl and
8817 then thunk-specific key info. */
8818 if (streaming_p ())
8819 {
8820 i (tt_thunk);
8821 i (THUNK_FIXED_OFFSET (decl));
8822 }
8823
8824 tree target = decl;
8825 while (DECL_THUNK_P (target))
8826 target = THUNK_TARGET (target);
8827 tree_node (target);
8828 tree_node (THUNK_VIRTUAL_OFFSET (decl));
8829 int tag = insert (decl);
8830 if (streaming_p ())
8831 dump (dumper::TREE)
8832 && dump ("Wrote:%d thunk %N to %N", tag, DECL_NAME (decl), target);
8833 return false;
8834 }
8835
8836 if (DECL_CLONED_FUNCTION_P (decl))
8837 {
8838 tree target = get_clone_target (decl);
8839 if (streaming_p ())
8840 i (tt_clone_ref);
8841
8842 tree_node (target);
8843 tree_node (DECL_NAME (decl));
8844 if (DECL_VIRTUAL_P (decl))
8845 tree_node (DECL_VINDEX (decl));
8846 int tag = insert (decl);
8847 if (streaming_p ())
8848 dump (dumper::TREE)
8849 && dump ("Wrote:%d clone %N of %N", tag, DECL_NAME (decl), target);
8850 return false;
8851 }
8852
8853 /* Everything left should be a thing that is in the entity table.
8854 Mostly things that can be defined outside of their (original
8855 declaration) context. */
8856 gcc_checking_assert (TREE_CODE (decl) == TEMPLATE_DECL
8857 || VAR_P (decl)
8858 || TREE_CODE (decl) == FUNCTION_DECL
8859 || TREE_CODE (decl) == TYPE_DECL
8860 || TREE_CODE (decl) == USING_DECL
8861 || TREE_CODE (decl) == CONCEPT_DECL
8862 || TREE_CODE (decl) == NAMESPACE_DECL);
8863
8864 int use_tpl = -1;
8865 tree ti = node_template_info (decl, use_tpl);
8866 tree tpl = NULL_TREE;
8867
8868 /* If this is the TEMPLATE_DECL_RESULT of a TEMPLATE_DECL, get the
8869 TEMPLATE_DECL. Note TI_TEMPLATE is not a TEMPLATE_DECL for
8870 (some) friends, so we need to check that. */
8871 // FIXME: Should local friend template specializations be by value?
8872 // They don't get idents so we'll never know they're imported, but I
8873 // think we can only reach them from the TU that defines the
8874 // befriending class?
8875 if (ti && TREE_CODE (TI_TEMPLATE (ti)) == TEMPLATE_DECL
8876 && DECL_TEMPLATE_RESULT (TI_TEMPLATE (ti)) == decl)
8877 {
8878 tpl = TI_TEMPLATE (ti);
8879 partial_template:
8880 if (streaming_p ())
8881 {
8882 i (tt_template);
8883 dump (dumper::TREE)
8884 && dump ("Writing implicit template %C:%N%S",
8885 TREE_CODE (tpl), tpl, tpl);
8886 }
8887 tree_node (tpl);
8888
8889 /* Streaming TPL caused us to visit DECL and maybe its type. */
8890 gcc_checking_assert (TREE_VISITED (decl));
8891 if (DECL_IMPLICIT_TYPEDEF_P (decl))
8892 gcc_checking_assert (TREE_VISITED (TREE_TYPE (decl)));
8893 return false;
8894 }
8895
8896 tree ctx = CP_DECL_CONTEXT (decl);
8897 depset *dep = NULL;
8898 if (streaming_p ())
8899 dep = dep_hash->find_dependency (decl);
8900 else if (TREE_CODE (ctx) != FUNCTION_DECL
8901 || TREE_CODE (decl) == TEMPLATE_DECL
8902 || DECL_IMPLICIT_TYPEDEF_P (decl)
8903 || (DECL_LANG_SPECIFIC (decl)
8904 && DECL_MODULE_IMPORT_P (decl)))
8905 {
8906 auto kind = (TREE_CODE (decl) == NAMESPACE_DECL
8907 && !DECL_NAMESPACE_ALIAS (decl)
8908 ? depset::EK_NAMESPACE : depset::EK_DECL);
8909 dep = dep_hash->add_dependency (decl, kind);
8910 }
8911
8912 if (!dep)
8913 {
8914 /* Some internal entity of context. Do by value. */
8915 decl_value (decl, NULL);
8916 return false;
8917 }
8918
8919 if (dep->get_entity_kind () == depset::EK_REDIRECT)
8920 {
8921 /* The DECL_TEMPLATE_RESULT of a partial specialization.
8922 Write the partial specialization's template. */
8923 depset *redirect = dep->deps[0];
8924 gcc_checking_assert (redirect->get_entity_kind () == depset::EK_PARTIAL);
8925 tpl = redirect->get_entity ();
8926 goto partial_template;
8927 }
8928
8929 if (streaming_p ())
8930 {
8931 /* Locate the entity. */
8932 unsigned index = dep->cluster;
8933 unsigned import = 0;
8934
8935 if (dep->is_import ())
8936 import = dep->section;
8937 else if (CHECKING_P)
8938 /* It should be what we put there. */
8939 gcc_checking_assert (index == ~import_entity_index (decl));
8940
8941 #if CHECKING_P
8942 gcc_assert (!import || importedness >= 0);
8943 #endif
8944 i (tt_entity);
8945 u (import);
8946 u (index);
8947 }
8948
8949 int tag = insert (decl);
8950 if (streaming_p () && dump (dumper::TREE))
8951 {
8952 char const *kind = "import";
8953 module_state *from = (*modules)[0];
8954 if (dep->is_import ())
8955 /* Rediscover the unremapped index. */
8956 from = import_entity_module (import_entity_index (decl));
8957 else
8958 {
8959 tree o = get_originating_module_decl (decl);
8960 o = STRIP_TEMPLATE (o);
8961 kind = (DECL_LANG_SPECIFIC (o) && DECL_MODULE_PURVIEW_P (o)
8962 ? "purview" : "GMF");
8963 }
8964 dump ("Wrote %s:%d %C:%N@%M", kind,
8965 tag, TREE_CODE (decl), decl, from);
8966 }
8967
8968 add_indirects (decl);
8969
8970 return false;
8971 }
8972
8973 void
8974 trees_out::type_node (tree type)
8975 {
8976 gcc_assert (TYPE_P (type));
8977
8978 tree root = (TYPE_NAME (type)
8979 ? TREE_TYPE (TYPE_NAME (type)) : TYPE_MAIN_VARIANT (type));
8980
8981 if (type != root)
8982 {
8983 if (streaming_p ())
8984 i (tt_variant_type);
8985 tree_node (root);
8986
8987 int flags = -1;
8988
8989 if (TREE_CODE (type) == FUNCTION_TYPE
8990 || TREE_CODE (type) == METHOD_TYPE)
8991 {
8992 int quals = type_memfn_quals (type);
8993 int rquals = type_memfn_rqual (type);
8994 tree raises = TYPE_RAISES_EXCEPTIONS (type);
8995 bool late = TYPE_HAS_LATE_RETURN_TYPE (type);
8996
8997 if (raises != TYPE_RAISES_EXCEPTIONS (root)
8998 || rquals != type_memfn_rqual (root)
8999 || quals != type_memfn_quals (root)
9000 || late != TYPE_HAS_LATE_RETURN_TYPE (root))
9001 flags = rquals | (int (late) << 2) | (quals << 3);
9002 }
9003 else
9004 {
9005 if (TYPE_USER_ALIGN (type))
9006 flags = TYPE_ALIGN_RAW (type);
9007 }
9008
9009 if (streaming_p ())
9010 i (flags);
9011
9012 if (flags < 0)
9013 ;
9014 else if (TREE_CODE (type) == FUNCTION_TYPE
9015 || TREE_CODE (type) == METHOD_TYPE)
9016 {
9017 tree raises = TYPE_RAISES_EXCEPTIONS (type);
9018 if (raises == TYPE_RAISES_EXCEPTIONS (root))
9019 raises = error_mark_node;
9020 tree_node (raises);
9021 }
9022
9023 tree_node (TYPE_ATTRIBUTES (type));
9024
9025 if (streaming_p ())
9026 {
9027 /* Qualifiers. */
9028 int rquals = cp_type_quals (root);
9029 int quals = cp_type_quals (type);
9030 if (quals == rquals)
9031 quals = -1;
9032 i (quals);
9033 }
9034
9035 if (ref_node (type) != WK_none)
9036 {
9037 int tag = insert (type);
9038 if (streaming_p ())
9039 {
9040 i (0);
9041 dump (dumper::TREE)
9042 && dump ("Wrote:%d variant type %C", tag, TREE_CODE (type));
9043 }
9044 }
9045 return;
9046 }
9047
9048 if (tree name = TYPE_NAME (type))
9049 if ((TREE_CODE (name) == TYPE_DECL && DECL_ORIGINAL_TYPE (name))
9050 || DECL_TEMPLATE_PARM_P (name)
9051 || TREE_CODE (type) == RECORD_TYPE
9052 || TREE_CODE (type) == UNION_TYPE
9053 || TREE_CODE (type) == ENUMERAL_TYPE)
9054 {
9055 /* We can meet template parms that we didn't meet in the
9056 tpl_parms walk, because we're referring to a derived type
9057 that was previously constructed from equivalent template
9058 parms. */
9059 if (streaming_p ())
9060 {
9061 i (tt_typedef_type);
9062 dump (dumper::TREE)
9063 && dump ("Writing %stypedef %C:%N",
9064 DECL_IMPLICIT_TYPEDEF_P (name) ? "implicit " : "",
9065 TREE_CODE (name), name);
9066 }
9067 tree_node (name);
9068 if (streaming_p ())
9069 dump (dumper::TREE) && dump ("Wrote typedef %C:%N%S",
9070 TREE_CODE (name), name, name);
9071 gcc_checking_assert (TREE_VISITED (type));
9072 return;
9073 }
9074
9075 if (TYPE_PTRMEMFUNC_P (type))
9076 {
9077 /* This is a distinct type node, masquerading as a structure. */
9078 tree fn_type = TYPE_PTRMEMFUNC_FN_TYPE (type);
9079 if (streaming_p ())
9080 i (tt_ptrmem_type);
9081 tree_node (fn_type);
9082 int tag = insert (type);
9083 if (streaming_p ())
9084 dump (dumper::TREE) && dump ("Written:%d ptrmem type", tag);
9085 return;
9086 }
9087
9088 if (streaming_p ())
9089 {
9090 u (tt_derived_type);
9091 u (TREE_CODE (type));
9092 }
9093
9094 tree_node (TREE_TYPE (type));
9095 switch (TREE_CODE (type))
9096 {
9097 default:
9098 /* We should never meet a type here that is indescribable in
9099 terms of other types. */
9100 gcc_unreachable ();
9101
9102 case ARRAY_TYPE:
9103 tree_node (TYPE_DOMAIN (type));
9104 if (streaming_p ())
9105 /* Dependent arrays are constructed with TYPE_DEPENENT_P
9106 already set. */
9107 u (TYPE_DEPENDENT_P (type));
9108 break;
9109
9110 case COMPLEX_TYPE:
9111 /* No additional data. */
9112 break;
9113
9114 case BOOLEAN_TYPE:
9115 /* A non-standard boolean type. */
9116 if (streaming_p ())
9117 u (TYPE_PRECISION (type));
9118 break;
9119
9120 case INTEGER_TYPE:
9121 if (TREE_TYPE (type))
9122 {
9123 /* A range type (representing an array domain). */
9124 tree_node (TYPE_MIN_VALUE (type));
9125 tree_node (TYPE_MAX_VALUE (type));
9126 }
9127 else
9128 {
9129 /* A new integral type (representing a bitfield). */
9130 if (streaming_p ())
9131 {
9132 unsigned prec = TYPE_PRECISION (type);
9133 bool unsigned_p = TYPE_UNSIGNED (type);
9134
9135 u ((prec << 1) | unsigned_p);
9136 }
9137 }
9138 break;
9139
9140 case METHOD_TYPE:
9141 case FUNCTION_TYPE:
9142 {
9143 gcc_checking_assert (type_memfn_rqual (type) == REF_QUAL_NONE);
9144
9145 tree arg_types = TYPE_ARG_TYPES (type);
9146 if (TREE_CODE (type) == METHOD_TYPE)
9147 {
9148 tree_node (TREE_TYPE (TREE_VALUE (arg_types)));
9149 arg_types = TREE_CHAIN (arg_types);
9150 }
9151 tree_node (arg_types);
9152 }
9153 break;
9154
9155 case OFFSET_TYPE:
9156 tree_node (TYPE_OFFSET_BASETYPE (type));
9157 break;
9158
9159 case POINTER_TYPE:
9160 /* No additional data. */
9161 break;
9162
9163 case REFERENCE_TYPE:
9164 if (streaming_p ())
9165 u (TYPE_REF_IS_RVALUE (type));
9166 break;
9167
9168 case DECLTYPE_TYPE:
9169 case TYPEOF_TYPE:
9170 case DEPENDENT_OPERATOR_TYPE:
9171 tree_node (TYPE_VALUES_RAW (type));
9172 if (TREE_CODE (type) == DECLTYPE_TYPE)
9173 /* We stash a whole bunch of things into decltype's
9174 flags. */
9175 if (streaming_p ())
9176 tree_node_bools (type);
9177 break;
9178
9179 case TRAIT_TYPE:
9180 tree_node (TRAIT_TYPE_KIND_RAW (type));
9181 tree_node (TRAIT_TYPE_TYPE1 (type));
9182 tree_node (TRAIT_TYPE_TYPE2 (type));
9183 break;
9184
9185 case TYPE_ARGUMENT_PACK:
9186 /* No additional data. */
9187 break;
9188
9189 case TYPE_PACK_EXPANSION:
9190 if (streaming_p ())
9191 u (PACK_EXPANSION_LOCAL_P (type));
9192 tree_node (PACK_EXPANSION_PARAMETER_PACKS (type));
9193 tree_node (PACK_EXPANSION_EXTRA_ARGS (type));
9194 break;
9195
9196 case TYPENAME_TYPE:
9197 {
9198 tree_node (TYPE_CONTEXT (type));
9199 tree_node (DECL_NAME (TYPE_NAME (type)));
9200 tree_node (TYPENAME_TYPE_FULLNAME (type));
9201 if (streaming_p ())
9202 {
9203 enum tag_types tag_type = none_type;
9204 if (TYPENAME_IS_ENUM_P (type))
9205 tag_type = enum_type;
9206 else if (TYPENAME_IS_CLASS_P (type))
9207 tag_type = class_type;
9208 u (int (tag_type));
9209 }
9210 }
9211 break;
9212
9213 case UNBOUND_CLASS_TEMPLATE:
9214 {
9215 tree decl = TYPE_NAME (type);
9216 tree_node (DECL_CONTEXT (decl));
9217 tree_node (DECL_NAME (decl));
9218 tree_node (DECL_TEMPLATE_PARMS (decl));
9219 }
9220 break;
9221
9222 case VECTOR_TYPE:
9223 if (streaming_p ())
9224 {
9225 poly_uint64 nunits = TYPE_VECTOR_SUBPARTS (type);
9226 for (unsigned ix = 0; ix != NUM_POLY_INT_COEFFS; ix++)
9227 wu (nunits.coeffs[ix]);
9228 }
9229 break;
9230 }
9231
9232 /* We may have met the type during emitting the above. */
9233 if (ref_node (type) != WK_none)
9234 {
9235 int tag = insert (type);
9236 if (streaming_p ())
9237 {
9238 i (0);
9239 dump (dumper::TREE)
9240 && dump ("Wrote:%d derived type %C", tag, TREE_CODE (type));
9241 }
9242 }
9243
9244 return;
9245 }
9246
9247 /* T is (mostly*) a non-mergeable node that must be written by value.
9248 The mergeable case is a BINFO, which are as-if DECLSs. */
9249
9250 void
9251 trees_out::tree_value (tree t)
9252 {
9253 /* We should never be writing a type by value. tree_type should
9254 have streamed it, or we're going via its TYPE_DECL. */
9255 gcc_checking_assert (!TYPE_P (t));
9256
9257 if (DECL_P (t))
9258 /* No template, type, var or function, except anonymous
9259 non-context vars. */
9260 gcc_checking_assert ((TREE_CODE (t) != TEMPLATE_DECL
9261 && TREE_CODE (t) != TYPE_DECL
9262 && (TREE_CODE (t) != VAR_DECL
9263 || (!DECL_NAME (t) && !DECL_CONTEXT (t)))
9264 && TREE_CODE (t) != FUNCTION_DECL));
9265
9266 if (streaming_p ())
9267 {
9268 /* A new node -> tt_node. */
9269 tree_val_count++;
9270 i (tt_node);
9271 start (t);
9272 tree_node_bools (t);
9273 }
9274
9275 if (TREE_CODE (t) == TREE_BINFO)
9276 /* Binfos are decl-like and need merging information. */
9277 binfo_mergeable (t);
9278
9279 int tag = insert (t, WK_value);
9280 if (streaming_p ())
9281 dump (dumper::TREE)
9282 && dump ("Writing tree:%d %C:%N", tag, TREE_CODE (t), t);
9283
9284 tree_node_vals (t);
9285
9286 if (streaming_p ())
9287 dump (dumper::TREE) && dump ("Written tree:%d %C:%N", tag, TREE_CODE (t), t);
9288 }
9289
9290 tree
9291 trees_in::tree_value ()
9292 {
9293 tree t = start ();
9294 if (!t || !tree_node_bools (t))
9295 return NULL_TREE;
9296
9297 tree existing = t;
9298 if (TREE_CODE (t) == TREE_BINFO)
9299 {
9300 tree type;
9301 unsigned ix = binfo_mergeable (&type);
9302 if (TYPE_BINFO (type))
9303 {
9304 /* We already have a definition, this must be a duplicate. */
9305 dump (dumper::MERGE)
9306 && dump ("Deduping binfo %N[%u]", type, ix);
9307 existing = TYPE_BINFO (type);
9308 while (existing && ix--)
9309 existing = TREE_CHAIN (existing);
9310 if (existing)
9311 register_duplicate (t, existing);
9312 else
9313 /* Error, mismatch -- diagnose in read_class_def's
9314 checking. */
9315 existing = t;
9316 }
9317 }
9318
9319 /* Insert into map. */
9320 int tag = insert (existing);
9321 dump (dumper::TREE)
9322 && dump ("Reading tree:%d %C", tag, TREE_CODE (t));
9323
9324 if (!tree_node_vals (t))
9325 {
9326 back_refs[~tag] = NULL_TREE;
9327 set_overrun ();
9328 /* Bail. */
9329 return NULL_TREE;
9330 }
9331
9332 if (TREE_CODE (t) == LAMBDA_EXPR
9333 && CLASSTYPE_LAMBDA_EXPR (TREE_TYPE (t)))
9334 {
9335 existing = CLASSTYPE_LAMBDA_EXPR (TREE_TYPE (t));
9336 back_refs[~tag] = existing;
9337 }
9338
9339 dump (dumper::TREE) && dump ("Read tree:%d %C:%N", tag, TREE_CODE (t), t);
9340
9341 if (TREE_CODE (existing) == INTEGER_CST && !TREE_OVERFLOW (existing))
9342 {
9343 existing = cache_integer_cst (t, true);
9344 back_refs[~tag] = existing;
9345 }
9346
9347 return existing;
9348 }
9349
9350 /* Stream out tree node T. We automatically create local back
9351 references, which is essentially a single pass lisp
9352 self-referential structure pretty-printer. */
9353
9354 void
9355 trees_out::tree_node (tree t)
9356 {
9357 dump.indent ();
9358 walk_kind ref = ref_node (t);
9359 if (ref == WK_none)
9360 goto done;
9361
9362 if (ref != WK_normal)
9363 goto skip_normal;
9364
9365 if (TREE_CODE (t) == IDENTIFIER_NODE)
9366 {
9367 /* An identifier node -> tt_id, tt_conv_id, tt_anon_id, tt_lambda_id. */
9368 int code = tt_id;
9369 if (IDENTIFIER_ANON_P (t))
9370 code = IDENTIFIER_LAMBDA_P (t) ? tt_lambda_id : tt_anon_id;
9371 else if (IDENTIFIER_CONV_OP_P (t))
9372 code = tt_conv_id;
9373
9374 if (streaming_p ())
9375 i (code);
9376
9377 if (code == tt_conv_id)
9378 {
9379 tree type = TREE_TYPE (t);
9380 gcc_checking_assert (type || t == conv_op_identifier);
9381 tree_node (type);
9382 }
9383 else if (code == tt_id && streaming_p ())
9384 str (IDENTIFIER_POINTER (t), IDENTIFIER_LENGTH (t));
9385
9386 int tag = insert (t);
9387 if (streaming_p ())
9388 {
9389 /* We know the ordering of the 4 id tags. */
9390 static const char *const kinds[] =
9391 {"", "conv_op ", "anon ", "lambda "};
9392 dump (dumper::TREE)
9393 && dump ("Written:%d %sidentifier:%N", tag,
9394 kinds[code - tt_id],
9395 code == tt_conv_id ? TREE_TYPE (t) : t);
9396 }
9397 goto done;
9398 }
9399
9400 if (TREE_CODE (t) == TREE_BINFO)
9401 {
9402 /* A BINFO -> tt_binfo.
9403 We must do this by reference. We stream the binfo tree
9404 itself when streaming its owning RECORD_TYPE. That we got
9405 here means the dominating type is not in this SCC. */
9406 if (streaming_p ())
9407 i (tt_binfo);
9408 binfo_mergeable (t);
9409 gcc_checking_assert (!TREE_VISITED (t));
9410 int tag = insert (t);
9411 if (streaming_p ())
9412 dump (dumper::TREE) && dump ("Inserting binfo:%d %N", tag, t);
9413 goto done;
9414 }
9415
9416 if (TREE_CODE (t) == INTEGER_CST
9417 && !TREE_OVERFLOW (t)
9418 && TREE_CODE (TREE_TYPE (t)) == ENUMERAL_TYPE)
9419 {
9420 /* An integral constant of enumeral type. See if it matches one
9421 of the enumeration values. */
9422 for (tree values = TYPE_VALUES (TREE_TYPE (t));
9423 values; values = TREE_CHAIN (values))
9424 {
9425 tree decl = TREE_VALUE (values);
9426 if (tree_int_cst_equal (DECL_INITIAL (decl), t))
9427 {
9428 if (streaming_p ())
9429 u (tt_enum_value);
9430 tree_node (decl);
9431 dump (dumper::TREE) && dump ("Written enum value %N", decl);
9432 goto done;
9433 }
9434 }
9435 /* It didn't match. We'll write it a an explicit INTEGER_CST
9436 node. */
9437 }
9438
9439 if (TYPE_P (t))
9440 {
9441 type_node (t);
9442 goto done;
9443 }
9444
9445 if (DECL_P (t))
9446 {
9447 if (DECL_TEMPLATE_PARM_P (t))
9448 {
9449 tpl_parm_value (t);
9450 goto done;
9451 }
9452
9453 if (!DECL_CONTEXT (t))
9454 {
9455 /* There are a few cases of decls with no context. We'll write
9456 these by value, but first assert they are cases we expect. */
9457 gcc_checking_assert (ref == WK_normal);
9458 switch (TREE_CODE (t))
9459 {
9460 default: gcc_unreachable ();
9461
9462 case LABEL_DECL:
9463 /* CASE_LABEL_EXPRs contain uncontexted LABEL_DECLs. */
9464 gcc_checking_assert (!DECL_NAME (t));
9465 break;
9466
9467 case VAR_DECL:
9468 /* AGGR_INIT_EXPRs cons up anonymous uncontexted VAR_DECLs. */
9469 gcc_checking_assert (!DECL_NAME (t)
9470 && DECL_ARTIFICIAL (t));
9471 break;
9472
9473 case PARM_DECL:
9474 /* REQUIRES_EXPRs have a tree list of uncontexted
9475 PARM_DECLS. It'd be nice if they had a
9476 distinguishing flag to double check. */
9477 break;
9478 }
9479 goto by_value;
9480 }
9481 }
9482
9483 skip_normal:
9484 if (DECL_P (t) && !decl_node (t, ref))
9485 goto done;
9486
9487 /* Otherwise by value */
9488 by_value:
9489 tree_value (t);
9490
9491 done:
9492 /* And, breath out. */
9493 dump.outdent ();
9494 }
9495
9496 /* Stream in a tree node. */
9497
9498 tree
9499 trees_in::tree_node (bool is_use)
9500 {
9501 if (get_overrun ())
9502 return NULL_TREE;
9503
9504 dump.indent ();
9505 int tag = i ();
9506 tree res = NULL_TREE;
9507 switch (tag)
9508 {
9509 default:
9510 /* backref, pull it out of the map. */
9511 res = back_ref (tag);
9512 break;
9513
9514 case tt_null:
9515 /* NULL_TREE. */
9516 break;
9517
9518 case tt_fixed:
9519 /* A fixed ref, find it in the fixed_ref array. */
9520 {
9521 unsigned fix = u ();
9522 if (fix < (*fixed_trees).length ())
9523 {
9524 res = (*fixed_trees)[fix];
9525 dump (dumper::TREE) && dump ("Read fixed:%u %C:%N%S", fix,
9526 TREE_CODE (res), res, res);
9527 }
9528
9529 if (!res)
9530 set_overrun ();
9531 }
9532 break;
9533
9534 case tt_parm:
9535 {
9536 tree fn = tree_node ();
9537 if (fn && TREE_CODE (fn) == FUNCTION_DECL)
9538 res = tree_node ();
9539 if (res)
9540 dump (dumper::TREE)
9541 && dump ("Read %s reference %N",
9542 TREE_CODE (res) == PARM_DECL ? "parameter" : "result",
9543 res);
9544 }
9545 break;
9546
9547 case tt_node:
9548 /* A new node. Stream it in. */
9549 res = tree_value ();
9550 break;
9551
9552 case tt_decl:
9553 /* A new decl. Stream it in. */
9554 res = decl_value ();
9555 break;
9556
9557 case tt_tpl_parm:
9558 /* A template parameter. Stream it in. */
9559 res = tpl_parm_value ();
9560 break;
9561
9562 case tt_id:
9563 /* An identifier node. */
9564 {
9565 size_t l;
9566 const char *chars = str (&l);
9567 res = get_identifier_with_length (chars, l);
9568 int tag = insert (res);
9569 dump (dumper::TREE)
9570 && dump ("Read identifier:%d %N", tag, res);
9571 }
9572 break;
9573
9574 case tt_conv_id:
9575 /* A conversion operator. Get the type and recreate the
9576 identifier. */
9577 {
9578 tree type = tree_node ();
9579 if (!get_overrun ())
9580 {
9581 res = type ? make_conv_op_name (type) : conv_op_identifier;
9582 int tag = insert (res);
9583 dump (dumper::TREE)
9584 && dump ("Created conv_op:%d %S for %N", tag, res, type);
9585 }
9586 }
9587 break;
9588
9589 case tt_anon_id:
9590 case tt_lambda_id:
9591 /* An anonymous or lambda id. */
9592 {
9593 res = make_anon_name ();
9594 if (tag == tt_lambda_id)
9595 IDENTIFIER_LAMBDA_P (res) = true;
9596 int tag = insert (res);
9597 dump (dumper::TREE)
9598 && dump ("Read %s identifier:%d %N",
9599 IDENTIFIER_LAMBDA_P (res) ? "lambda" : "anon", tag, res);
9600 }
9601 break;
9602
9603 case tt_typedef_type:
9604 res = tree_node ();
9605 if (res)
9606 {
9607 dump (dumper::TREE)
9608 && dump ("Read %stypedef %C:%N",
9609 DECL_IMPLICIT_TYPEDEF_P (res) ? "implicit " : "",
9610 TREE_CODE (res), res);
9611 res = TREE_TYPE (res);
9612 }
9613 break;
9614
9615 case tt_derived_type:
9616 /* A type derived from some other type. */
9617 {
9618 enum tree_code code = tree_code (u ());
9619 res = tree_node ();
9620
9621 switch (code)
9622 {
9623 default:
9624 set_overrun ();
9625 break;
9626
9627 case ARRAY_TYPE:
9628 {
9629 tree domain = tree_node ();
9630 int dep = u ();
9631 if (!get_overrun ())
9632 res = build_cplus_array_type (res, domain, dep);
9633 }
9634 break;
9635
9636 case COMPLEX_TYPE:
9637 if (!get_overrun ())
9638 res = build_complex_type (res);
9639 break;
9640
9641 case BOOLEAN_TYPE:
9642 {
9643 unsigned precision = u ();
9644 if (!get_overrun ())
9645 res = build_nonstandard_boolean_type (precision);
9646 }
9647 break;
9648
9649 case INTEGER_TYPE:
9650 if (res)
9651 {
9652 /* A range type (representing an array domain). */
9653 tree min = tree_node ();
9654 tree max = tree_node ();
9655
9656 if (!get_overrun ())
9657 res = build_range_type (res, min, max);
9658 }
9659 else
9660 {
9661 /* A new integral type (representing a bitfield). */
9662 unsigned enc = u ();
9663 if (!get_overrun ())
9664 res = build_nonstandard_integer_type (enc >> 1, enc & 1);
9665 }
9666 break;
9667
9668 case FUNCTION_TYPE:
9669 case METHOD_TYPE:
9670 {
9671 tree klass = code == METHOD_TYPE ? tree_node () : NULL_TREE;
9672 tree args = tree_node ();
9673 if (!get_overrun ())
9674 {
9675 if (klass)
9676 res = build_method_type_directly (klass, res, args);
9677 else
9678 res = build_function_type (res, args);
9679 }
9680 }
9681 break;
9682
9683 case OFFSET_TYPE:
9684 {
9685 tree base = tree_node ();
9686 if (!get_overrun ())
9687 res = build_offset_type (base, res);
9688 }
9689 break;
9690
9691 case POINTER_TYPE:
9692 if (!get_overrun ())
9693 res = build_pointer_type (res);
9694 break;
9695
9696 case REFERENCE_TYPE:
9697 {
9698 bool rval = bool (u ());
9699 if (!get_overrun ())
9700 res = cp_build_reference_type (res, rval);
9701 }
9702 break;
9703
9704 case DECLTYPE_TYPE:
9705 case TYPEOF_TYPE:
9706 case DEPENDENT_OPERATOR_TYPE:
9707 {
9708 tree expr = tree_node ();
9709 if (!get_overrun ())
9710 {
9711 res = cxx_make_type (code);
9712 TYPE_VALUES_RAW (res) = expr;
9713 if (code == DECLTYPE_TYPE)
9714 tree_node_bools (res);
9715 SET_TYPE_STRUCTURAL_EQUALITY (res);
9716 }
9717 }
9718 break;
9719
9720 case TRAIT_TYPE:
9721 {
9722 tree kind = tree_node ();
9723 tree type1 = tree_node ();
9724 tree type2 = tree_node ();
9725 if (!get_overrun ())
9726 {
9727 res = cxx_make_type (TRAIT_TYPE);
9728 TRAIT_TYPE_KIND_RAW (res) = kind;
9729 TRAIT_TYPE_TYPE1 (res) = type1;
9730 TRAIT_TYPE_TYPE2 (res) = type2;
9731 SET_TYPE_STRUCTURAL_EQUALITY (res);
9732 }
9733 }
9734 break;
9735
9736 case TYPE_ARGUMENT_PACK:
9737 if (!get_overrun ())
9738 {
9739 tree pack = cxx_make_type (TYPE_ARGUMENT_PACK);
9740 ARGUMENT_PACK_ARGS (pack) = res;
9741 res = pack;
9742 }
9743 break;
9744
9745 case TYPE_PACK_EXPANSION:
9746 {
9747 bool local = u ();
9748 tree param_packs = tree_node ();
9749 tree extra_args = tree_node ();
9750 if (!get_overrun ())
9751 {
9752 tree expn = cxx_make_type (TYPE_PACK_EXPANSION);
9753 SET_TYPE_STRUCTURAL_EQUALITY (expn);
9754 PACK_EXPANSION_PATTERN (expn) = res;
9755 PACK_EXPANSION_PARAMETER_PACKS (expn) = param_packs;
9756 PACK_EXPANSION_EXTRA_ARGS (expn) = extra_args;
9757 PACK_EXPANSION_LOCAL_P (expn) = local;
9758 res = expn;
9759 }
9760 }
9761 break;
9762
9763 case TYPENAME_TYPE:
9764 {
9765 tree ctx = tree_node ();
9766 tree name = tree_node ();
9767 tree fullname = tree_node ();
9768 enum tag_types tag_type = tag_types (u ());
9769
9770 if (!get_overrun ())
9771 res = build_typename_type (ctx, name, fullname, tag_type);
9772 }
9773 break;
9774
9775 case UNBOUND_CLASS_TEMPLATE:
9776 {
9777 tree ctx = tree_node ();
9778 tree name = tree_node ();
9779 tree parms = tree_node ();
9780
9781 if (!get_overrun ())
9782 res = make_unbound_class_template_raw (ctx, name, parms);
9783 }
9784 break;
9785
9786 case VECTOR_TYPE:
9787 {
9788 poly_uint64 nunits;
9789 for (unsigned ix = 0; ix != NUM_POLY_INT_COEFFS; ix++)
9790 nunits.coeffs[ix] = wu ();
9791 if (!get_overrun ())
9792 res = build_vector_type (res, nunits);
9793 }
9794 break;
9795 }
9796
9797 int tag = i ();
9798 if (!tag)
9799 {
9800 tag = insert (res);
9801 if (res)
9802 dump (dumper::TREE)
9803 && dump ("Created:%d derived type %C", tag, code);
9804 }
9805 else
9806 res = back_ref (tag);
9807 }
9808 break;
9809
9810 case tt_variant_type:
9811 /* Variant of some type. */
9812 {
9813 res = tree_node ();
9814 int flags = i ();
9815 if (get_overrun ())
9816 ;
9817 else if (flags < 0)
9818 /* No change. */;
9819 else if (TREE_CODE (res) == FUNCTION_TYPE
9820 || TREE_CODE (res) == METHOD_TYPE)
9821 {
9822 cp_ref_qualifier rqual = cp_ref_qualifier (flags & 3);
9823 bool late = (flags >> 2) & 1;
9824 cp_cv_quals quals = cp_cv_quals (flags >> 3);
9825
9826 tree raises = tree_node ();
9827 if (raises == error_mark_node)
9828 raises = TYPE_RAISES_EXCEPTIONS (res);
9829
9830 res = build_cp_fntype_variant (res, rqual, raises, late);
9831 if (TREE_CODE (res) == FUNCTION_TYPE)
9832 res = apply_memfn_quals (res, quals, rqual);
9833 }
9834 else
9835 {
9836 res = build_aligned_type (res, (1u << flags) >> 1);
9837 TYPE_USER_ALIGN (res) = true;
9838 }
9839
9840 if (tree attribs = tree_node ())
9841 res = cp_build_type_attribute_variant (res, attribs);
9842
9843 int quals = i ();
9844 if (quals >= 0 && !get_overrun ())
9845 res = cp_build_qualified_type (res, quals);
9846
9847 int tag = i ();
9848 if (!tag)
9849 {
9850 tag = insert (res);
9851 if (res)
9852 dump (dumper::TREE)
9853 && dump ("Created:%d variant type %C", tag, TREE_CODE (res));
9854 }
9855 else
9856 res = back_ref (tag);
9857 }
9858 break;
9859
9860 case tt_tinfo_var:
9861 case tt_tinfo_typedef:
9862 /* A tinfo var or typedef. */
9863 {
9864 bool is_var = tag == tt_tinfo_var;
9865 unsigned ix = u ();
9866 tree type = NULL_TREE;
9867
9868 if (is_var)
9869 {
9870 tree name = tree_node ();
9871 type = tree_node ();
9872
9873 if (!get_overrun ())
9874 res = get_tinfo_decl_direct (type, name, int (ix));
9875 }
9876 else
9877 {
9878 if (!get_overrun ())
9879 {
9880 type = get_pseudo_tinfo_type (ix);
9881 res = TYPE_NAME (type);
9882 }
9883 }
9884 if (res)
9885 {
9886 int tag = insert (res);
9887 dump (dumper::TREE)
9888 && dump ("Created tinfo_%s:%d %S:%u for %N",
9889 is_var ? "var" : "decl", tag, res, ix, type);
9890 if (!is_var)
9891 {
9892 tag = insert (type);
9893 dump (dumper::TREE)
9894 && dump ("Created tinfo_type:%d %u %N", tag, ix, type);
9895 }
9896 }
9897 }
9898 break;
9899
9900 case tt_ptrmem_type:
9901 /* A pointer to member function. */
9902 {
9903 tree type = tree_node ();
9904 if (type && TREE_CODE (type) == POINTER_TYPE
9905 && TREE_CODE (TREE_TYPE (type)) == METHOD_TYPE)
9906 {
9907 res = build_ptrmemfunc_type (type);
9908 int tag = insert (res);
9909 dump (dumper::TREE) && dump ("Created:%d ptrmem type", tag);
9910 }
9911 else
9912 set_overrun ();
9913 }
9914 break;
9915
9916 case tt_nttp_var:
9917 /* An NTTP object. */
9918 {
9919 tree init = tree_node ();
9920 tree name = tree_node ();
9921 if (!get_overrun ())
9922 {
9923 res = get_template_parm_object (init, name);
9924 int tag = insert (res);
9925 dump (dumper::TREE)
9926 && dump ("Created nttp object:%d %N", tag, name);
9927 }
9928 }
9929 break;
9930
9931 case tt_enum_value:
9932 /* An enum const value. */
9933 {
9934 if (tree decl = tree_node ())
9935 {
9936 dump (dumper::TREE) && dump ("Read enum value %N", decl);
9937 res = DECL_INITIAL (decl);
9938 }
9939
9940 if (!res)
9941 set_overrun ();
9942 }
9943 break;
9944
9945 case tt_enum_decl:
9946 /* An enum decl. */
9947 {
9948 tree ctx = tree_node ();
9949 tree name = tree_node ();
9950
9951 if (!get_overrun ()
9952 && TREE_CODE (ctx) == ENUMERAL_TYPE)
9953 res = find_enum_member (ctx, name);
9954
9955 if (!res)
9956 set_overrun ();
9957 else
9958 {
9959 int tag = insert (res);
9960 dump (dumper::TREE)
9961 && dump ("Read enum decl:%d %C:%N", tag, TREE_CODE (res), res);
9962 }
9963 }
9964 break;
9965
9966 case tt_data_member:
9967 /* A data member. */
9968 {
9969 tree ctx = tree_node ();
9970 tree name = tree_node ();
9971
9972 if (!get_overrun ()
9973 && RECORD_OR_UNION_TYPE_P (ctx))
9974 {
9975 if (name)
9976 res = lookup_class_binding (ctx, name);
9977 else
9978 res = lookup_field_ident (ctx, u ());
9979
9980 if (!res
9981 || TREE_CODE (res) != FIELD_DECL
9982 || DECL_CONTEXT (res) != ctx)
9983 res = NULL_TREE;
9984 }
9985
9986 if (!res)
9987 set_overrun ();
9988 else
9989 {
9990 int tag = insert (res);
9991 dump (dumper::TREE)
9992 && dump ("Read member:%d %C:%N", tag, TREE_CODE (res), res);
9993 }
9994 }
9995 break;
9996
9997 case tt_binfo:
9998 /* A BINFO. Walk the tree of the dominating type. */
9999 {
10000 tree type;
10001 unsigned ix = binfo_mergeable (&type);
10002 if (type)
10003 {
10004 res = TYPE_BINFO (type);
10005 for (; ix && res; res = TREE_CHAIN (res))
10006 ix--;
10007 if (!res)
10008 set_overrun ();
10009 }
10010
10011 if (get_overrun ())
10012 break;
10013
10014 /* Insert binfo into backreferences. */
10015 tag = insert (res);
10016 dump (dumper::TREE) && dump ("Read binfo:%d %N", tag, res);
10017 }
10018 break;
10019
10020 case tt_vtable:
10021 {
10022 unsigned ix = u ();
10023 tree ctx = tree_node ();
10024 dump (dumper::TREE) && dump ("Reading vtable %N[%u]", ctx, ix);
10025 if (TREE_CODE (ctx) == RECORD_TYPE && TYPE_LANG_SPECIFIC (ctx))
10026 for (res = CLASSTYPE_VTABLES (ctx); res; res = DECL_CHAIN (res))
10027 if (!ix--)
10028 break;
10029 if (!res)
10030 set_overrun ();
10031 }
10032 break;
10033
10034 case tt_thunk:
10035 {
10036 int fixed = i ();
10037 tree target = tree_node ();
10038 tree virt = tree_node ();
10039
10040 for (tree thunk = DECL_THUNKS (target);
10041 thunk; thunk = DECL_CHAIN (thunk))
10042 if (THUNK_FIXED_OFFSET (thunk) == fixed
10043 && !THUNK_VIRTUAL_OFFSET (thunk) == !virt
10044 && (!virt
10045 || tree_int_cst_equal (virt, THUNK_VIRTUAL_OFFSET (thunk))))
10046 {
10047 res = thunk;
10048 break;
10049 }
10050
10051 int tag = insert (res);
10052 if (res)
10053 dump (dumper::TREE)
10054 && dump ("Read:%d thunk %N to %N", tag, DECL_NAME (res), target);
10055 else
10056 set_overrun ();
10057 }
10058 break;
10059
10060 case tt_clone_ref:
10061 {
10062 tree target = tree_node ();
10063 tree name = tree_node ();
10064
10065 if (DECL_P (target) && DECL_MAYBE_IN_CHARGE_CDTOR_P (target))
10066 {
10067 tree clone;
10068 FOR_EVERY_CLONE (clone, target)
10069 if (DECL_NAME (clone) == name)
10070 {
10071 res = clone;
10072 break;
10073 }
10074 }
10075
10076 /* A clone might have a different vtable entry. */
10077 if (res && DECL_VIRTUAL_P (res))
10078 DECL_VINDEX (res) = tree_node ();
10079
10080 if (!res)
10081 set_overrun ();
10082 int tag = insert (res);
10083 if (res)
10084 dump (dumper::TREE)
10085 && dump ("Read:%d clone %N of %N", tag, DECL_NAME (res), target);
10086 else
10087 set_overrun ();
10088 }
10089 break;
10090
10091 case tt_entity:
10092 /* Index into the entity table. Perhaps not loaded yet! */
10093 {
10094 unsigned origin = state->slurp->remap_module (u ());
10095 unsigned ident = u ();
10096 module_state *from = (*modules)[origin];
10097
10098 if (!origin || ident >= from->entity_num)
10099 set_overrun ();
10100 if (!get_overrun ())
10101 {
10102 binding_slot *slot = &(*entity_ary)[from->entity_lwm + ident];
10103 if (slot->is_lazy ())
10104 if (!from->lazy_load (ident, slot))
10105 set_overrun ();
10106 res = *slot;
10107 }
10108
10109 if (res)
10110 {
10111 const char *kind = (origin != state->mod ? "Imported" : "Named");
10112 int tag = insert (res);
10113 dump (dumper::TREE)
10114 && dump ("%s:%d %C:%N@%M", kind, tag, TREE_CODE (res),
10115 res, (*modules)[origin]);
10116
10117 if (!add_indirects (res))
10118 {
10119 set_overrun ();
10120 res = NULL_TREE;
10121 }
10122 }
10123 }
10124 break;
10125
10126 case tt_template:
10127 /* A template. */
10128 if (tree tpl = tree_node ())
10129 {
10130 res = DECL_TEMPLATE_RESULT (tpl);
10131 dump (dumper::TREE)
10132 && dump ("Read template %C:%N", TREE_CODE (res), res);
10133 }
10134 break;
10135 }
10136
10137 if (is_use && !unused && res && DECL_P (res) && !TREE_USED (res))
10138 {
10139 /* Mark decl used as mark_used does -- we cannot call
10140 mark_used in the middle of streaming, we only need a subset
10141 of its functionality. */
10142 TREE_USED (res) = true;
10143
10144 /* And for structured bindings also the underlying decl. */
10145 if (DECL_DECOMPOSITION_P (res) && DECL_DECOMP_BASE (res))
10146 TREE_USED (DECL_DECOMP_BASE (res)) = true;
10147
10148 if (DECL_CLONED_FUNCTION_P (res))
10149 TREE_USED (DECL_CLONED_FUNCTION (res)) = true;
10150 }
10151
10152 dump.outdent ();
10153 return res;
10154 }
10155
10156 void
10157 trees_out::tpl_parms (tree parms, unsigned &tpl_levels)
10158 {
10159 if (!parms)
10160 return;
10161
10162 if (TREE_VISITED (parms))
10163 {
10164 ref_node (parms);
10165 return;
10166 }
10167
10168 tpl_parms (TREE_CHAIN (parms), tpl_levels);
10169
10170 tree vec = TREE_VALUE (parms);
10171 unsigned len = TREE_VEC_LENGTH (vec);
10172 /* Depth. */
10173 int tag = insert (parms);
10174 if (streaming_p ())
10175 {
10176 i (len + 1);
10177 dump (dumper::TREE)
10178 && dump ("Writing template parms:%d level:%N length:%d",
10179 tag, TREE_PURPOSE (parms), len);
10180 }
10181 tree_node (TREE_PURPOSE (parms));
10182
10183 for (unsigned ix = 0; ix != len; ix++)
10184 {
10185 tree parm = TREE_VEC_ELT (vec, ix);
10186 tree decl = TREE_VALUE (parm);
10187
10188 gcc_checking_assert (DECL_TEMPLATE_PARM_P (decl));
10189 if (CHECKING_P)
10190 switch (TREE_CODE (decl))
10191 {
10192 default: gcc_unreachable ();
10193
10194 case TEMPLATE_DECL:
10195 gcc_assert ((TREE_CODE (TREE_TYPE (decl)) == TEMPLATE_TEMPLATE_PARM)
10196 && (TREE_CODE (DECL_TEMPLATE_RESULT (decl)) == TYPE_DECL)
10197 && (TYPE_NAME (TREE_TYPE (decl)) == decl));
10198 break;
10199
10200 case TYPE_DECL:
10201 gcc_assert ((TREE_CODE (TREE_TYPE (decl)) == TEMPLATE_TYPE_PARM)
10202 && (TYPE_NAME (TREE_TYPE (decl)) == decl));
10203 break;
10204
10205 case PARM_DECL:
10206 gcc_assert ((TREE_CODE (DECL_INITIAL (decl)) == TEMPLATE_PARM_INDEX)
10207 && (TREE_CODE (TEMPLATE_PARM_DECL (DECL_INITIAL (decl)))
10208 == CONST_DECL)
10209 && (DECL_TEMPLATE_PARM_P
10210 (TEMPLATE_PARM_DECL (DECL_INITIAL (decl)))));
10211 break;
10212 }
10213
10214 tree_node (decl);
10215 tree_node (TEMPLATE_PARM_CONSTRAINTS (parm));
10216 }
10217
10218 tpl_levels++;
10219 }
10220
10221 tree
10222 trees_in::tpl_parms (unsigned &tpl_levels)
10223 {
10224 tree parms = NULL_TREE;
10225
10226 while (int len = i ())
10227 {
10228 if (len < 0)
10229 {
10230 parms = back_ref (len);
10231 continue;
10232 }
10233
10234 len -= 1;
10235 parms = tree_cons (NULL_TREE, NULL_TREE, parms);
10236 int tag = insert (parms);
10237 TREE_PURPOSE (parms) = tree_node ();
10238
10239 dump (dumper::TREE)
10240 && dump ("Reading template parms:%d level:%N length:%d",
10241 tag, TREE_PURPOSE (parms), len);
10242
10243 tree vec = make_tree_vec (len);
10244 for (int ix = 0; ix != len; ix++)
10245 {
10246 tree decl = tree_node ();
10247 if (!decl)
10248 return NULL_TREE;
10249
10250 tree parm = build_tree_list (NULL, decl);
10251 TEMPLATE_PARM_CONSTRAINTS (parm) = tree_node ();
10252
10253 TREE_VEC_ELT (vec, ix) = parm;
10254 }
10255
10256 TREE_VALUE (parms) = vec;
10257 tpl_levels++;
10258 }
10259
10260 return parms;
10261 }
10262
10263 void
10264 trees_out::tpl_parms_fini (tree tmpl, unsigned tpl_levels)
10265 {
10266 for (tree parms = DECL_TEMPLATE_PARMS (tmpl);
10267 tpl_levels--; parms = TREE_CHAIN (parms))
10268 {
10269 tree vec = TREE_VALUE (parms);
10270
10271 tree_node (TREE_TYPE (vec));
10272 for (unsigned ix = TREE_VEC_LENGTH (vec); ix--;)
10273 {
10274 tree parm = TREE_VEC_ELT (vec, ix);
10275 tree dflt = TREE_PURPOSE (parm);
10276 tree_node (dflt);
10277
10278 /* Template template parameters need a context of their owning
10279 template. This is quite tricky to infer correctly on stream-in
10280 (see PR c++/98881) so we'll just provide it directly. */
10281 tree decl = TREE_VALUE (parm);
10282 if (TREE_CODE (decl) == TEMPLATE_DECL)
10283 tree_node (DECL_CONTEXT (decl));
10284 }
10285 }
10286 }
10287
10288 bool
10289 trees_in::tpl_parms_fini (tree tmpl, unsigned tpl_levels)
10290 {
10291 for (tree parms = DECL_TEMPLATE_PARMS (tmpl);
10292 tpl_levels--; parms = TREE_CHAIN (parms))
10293 {
10294 tree vec = TREE_VALUE (parms);
10295
10296 TREE_TYPE (vec) = tree_node ();
10297 for (unsigned ix = TREE_VEC_LENGTH (vec); ix--;)
10298 {
10299 tree parm = TREE_VEC_ELT (vec, ix);
10300 tree dflt = tree_node ();
10301 TREE_PURPOSE (parm) = dflt;
10302
10303 tree decl = TREE_VALUE (parm);
10304 if (TREE_CODE (decl) == TEMPLATE_DECL)
10305 DECL_CONTEXT (decl) = tree_node ();
10306
10307 if (get_overrun ())
10308 return false;
10309 }
10310 }
10311 return true;
10312 }
10313
10314 /* PARMS is a LIST, one node per level.
10315 TREE_VALUE is a TREE_VEC of parm info for that level.
10316 each ELT is a TREE_LIST
10317 TREE_VALUE is PARM_DECL, TYPE_DECL or TEMPLATE_DECL
10318 TREE_PURPOSE is the default value. */
10319
10320 void
10321 trees_out::tpl_header (tree tpl, unsigned *tpl_levels)
10322 {
10323 tree parms = DECL_TEMPLATE_PARMS (tpl);
10324 tpl_parms (parms, *tpl_levels);
10325
10326 /* Mark end. */
10327 if (streaming_p ())
10328 u (0);
10329
10330 if (*tpl_levels)
10331 tree_node (TEMPLATE_PARMS_CONSTRAINTS (parms));
10332 }
10333
10334 bool
10335 trees_in::tpl_header (tree tpl, unsigned *tpl_levels)
10336 {
10337 tree parms = tpl_parms (*tpl_levels);
10338 if (!parms)
10339 return false;
10340
10341 DECL_TEMPLATE_PARMS (tpl) = parms;
10342
10343 if (*tpl_levels)
10344 TEMPLATE_PARMS_CONSTRAINTS (parms) = tree_node ();
10345
10346 return true;
10347 }
10348
10349 /* Stream skeleton parm nodes, with their flags, type & parm indices.
10350 All the parms will have consecutive tags. */
10351
10352 void
10353 trees_out::fn_parms_init (tree fn)
10354 {
10355 /* First init them. */
10356 int base_tag = ref_num - 1;
10357 int ix = 0;
10358 for (tree parm = DECL_ARGUMENTS (fn);
10359 parm; parm = DECL_CHAIN (parm), ix++)
10360 {
10361 if (streaming_p ())
10362 {
10363 start (parm);
10364 tree_node_bools (parm);
10365 }
10366 int tag = insert (parm);
10367 gcc_checking_assert (base_tag - ix == tag);
10368 }
10369 /* Mark the end. */
10370 if (streaming_p ())
10371 u (0);
10372
10373 /* Now stream their contents. */
10374 ix = 0;
10375 for (tree parm = DECL_ARGUMENTS (fn);
10376 parm; parm = DECL_CHAIN (parm), ix++)
10377 {
10378 if (streaming_p ())
10379 dump (dumper::TREE)
10380 && dump ("Writing parm:%d %u (%N) of %N",
10381 base_tag - ix, ix, parm, fn);
10382 tree_node_vals (parm);
10383 }
10384
10385 if (!streaming_p ())
10386 {
10387 /* We must walk contract attrs so the dependency graph is complete. */
10388 for (tree contract = DECL_CONTRACTS (fn);
10389 contract;
10390 contract = CONTRACT_CHAIN (contract))
10391 tree_node (contract);
10392 }
10393
10394 /* Write a reference to contracts pre/post functions, if any, to avoid
10395 regenerating them in importers. */
10396 tree_node (DECL_PRE_FN (fn));
10397 tree_node (DECL_POST_FN (fn));
10398 }
10399
10400 /* Build skeleton parm nodes, read their flags, type & parm indices. */
10401
10402 int
10403 trees_in::fn_parms_init (tree fn)
10404 {
10405 int base_tag = ~(int)back_refs.length ();
10406
10407 tree *parm_ptr = &DECL_ARGUMENTS (fn);
10408 int ix = 0;
10409 for (; int code = u (); ix++)
10410 {
10411 tree parm = start (code);
10412 if (!tree_node_bools (parm))
10413 return 0;
10414
10415 int tag = insert (parm);
10416 gcc_checking_assert (base_tag - ix == tag);
10417 *parm_ptr = parm;
10418 parm_ptr = &DECL_CHAIN (parm);
10419 }
10420
10421 ix = 0;
10422 for (tree parm = DECL_ARGUMENTS (fn);
10423 parm; parm = DECL_CHAIN (parm), ix++)
10424 {
10425 dump (dumper::TREE)
10426 && dump ("Reading parm:%d %u (%N) of %N",
10427 base_tag - ix, ix, parm, fn);
10428 if (!tree_node_vals (parm))
10429 return 0;
10430 }
10431
10432 /* Reload references to contract functions, if any. */
10433 tree pre_fn = tree_node ();
10434 tree post_fn = tree_node ();
10435 set_contract_functions (fn, pre_fn, post_fn);
10436
10437 return base_tag;
10438 }
10439
10440 /* Read the remaining parm node data. Replace with existing (if
10441 non-null) in the map. */
10442
10443 void
10444 trees_in::fn_parms_fini (int tag, tree fn, tree existing, bool is_defn)
10445 {
10446 tree existing_parm = existing ? DECL_ARGUMENTS (existing) : NULL_TREE;
10447 tree parms = DECL_ARGUMENTS (fn);
10448 unsigned ix = 0;
10449 for (tree parm = parms; parm; parm = DECL_CHAIN (parm), ix++)
10450 {
10451 if (existing_parm)
10452 {
10453 if (is_defn && !DECL_SAVED_TREE (existing))
10454 {
10455 /* If we're about to become the definition, set the
10456 names of the parms from us. */
10457 DECL_NAME (existing_parm) = DECL_NAME (parm);
10458 DECL_SOURCE_LOCATION (existing_parm) = DECL_SOURCE_LOCATION (parm);
10459 }
10460
10461 back_refs[~tag] = existing_parm;
10462 existing_parm = DECL_CHAIN (existing_parm);
10463 }
10464 tag--;
10465 }
10466 }
10467
10468 /* Encode into KEY the position of the local type (class or enum)
10469 declaration DECL within FN. The position is encoded as the
10470 index of the innermost BLOCK (numbered in BFS order) along with
10471 the index within its BLOCK_VARS list. */
10472
10473 void
10474 trees_out::key_local_type (merge_key& key, tree decl, tree fn)
10475 {
10476 auto_vec<tree, 4> blocks;
10477 blocks.quick_push (DECL_INITIAL (fn));
10478 unsigned block_ix = 0;
10479 while (block_ix != blocks.length ())
10480 {
10481 tree block = blocks[block_ix];
10482 unsigned decl_ix = 0;
10483 for (tree var = BLOCK_VARS (block); var; var = DECL_CHAIN (var))
10484 {
10485 if (TREE_CODE (var) != TYPE_DECL)
10486 continue;
10487 if (var == decl)
10488 {
10489 key.index = (block_ix << 10) | decl_ix;
10490 return;
10491 }
10492 ++decl_ix;
10493 }
10494 for (tree sub = BLOCK_SUBBLOCKS (block); sub; sub = BLOCK_CHAIN (sub))
10495 blocks.safe_push (sub);
10496 ++block_ix;
10497 }
10498
10499 /* Not-found value. */
10500 key.index = 1023;
10501 }
10502
10503 /* Look up the local type corresponding at the position encoded by
10504 KEY within FN and named NAME. */
10505
10506 tree
10507 trees_in::key_local_type (const merge_key& key, tree fn, tree name)
10508 {
10509 if (!DECL_INITIAL (fn))
10510 return NULL_TREE;
10511
10512 const unsigned block_pos = key.index >> 10;
10513 const unsigned decl_pos = key.index & 1023;
10514
10515 if (decl_pos == 1023)
10516 return NULL_TREE;
10517
10518 auto_vec<tree, 4> blocks;
10519 blocks.quick_push (DECL_INITIAL (fn));
10520 unsigned block_ix = 0;
10521 while (block_ix != blocks.length ())
10522 {
10523 tree block = blocks[block_ix];
10524 if (block_ix == block_pos)
10525 {
10526 unsigned decl_ix = 0;
10527 for (tree var = BLOCK_VARS (block); var; var = DECL_CHAIN (var))
10528 {
10529 if (TREE_CODE (var) != TYPE_DECL)
10530 continue;
10531 /* Prefer using the identifier as the key for more robustness
10532 to ODR violations, except for anonymous types since their
10533 compiler-generated identifiers aren't stable. */
10534 if (IDENTIFIER_ANON_P (name)
10535 ? decl_ix == decl_pos
10536 : DECL_NAME (var) == name)
10537 return var;
10538 ++decl_ix;
10539 }
10540 return NULL_TREE;
10541 }
10542 for (tree sub = BLOCK_SUBBLOCKS (block); sub; sub = BLOCK_CHAIN (sub))
10543 blocks.safe_push (sub);
10544 ++block_ix;
10545 }
10546
10547 return NULL_TREE;
10548 }
10549
10550 /* DEP is the depset of some decl we're streaming by value. Determine
10551 the merging behaviour. */
10552
10553 merge_kind
10554 trees_out::get_merge_kind (tree decl, depset *dep)
10555 {
10556 if (!dep)
10557 {
10558 if (VAR_OR_FUNCTION_DECL_P (decl))
10559 {
10560 /* Any var or function with template info should have DEP. */
10561 gcc_checking_assert (!DECL_LANG_SPECIFIC (decl)
10562 || !DECL_TEMPLATE_INFO (decl));
10563 if (DECL_LOCAL_DECL_P (decl))
10564 return MK_unique;
10565 }
10566
10567 /* Either unique, or some member of a class that cannot have an
10568 out-of-class definition. For instance a FIELD_DECL. */
10569 tree ctx = CP_DECL_CONTEXT (decl);
10570 if (TREE_CODE (ctx) == FUNCTION_DECL)
10571 {
10572 /* USING_DECLs and NAMESPACE_DECLs cannot have DECL_TEMPLATE_INFO --
10573 this isn't permitting them to have one. */
10574 gcc_checking_assert (TREE_CODE (decl) == USING_DECL
10575 || TREE_CODE (decl) == NAMESPACE_DECL
10576 || !DECL_LANG_SPECIFIC (decl)
10577 || !DECL_TEMPLATE_INFO (decl));
10578
10579 return MK_unique;
10580 }
10581
10582 if (TREE_CODE (decl) == TEMPLATE_DECL
10583 && DECL_UNINSTANTIATED_TEMPLATE_FRIEND_P (decl))
10584 return MK_local_friend;
10585
10586 gcc_checking_assert (TYPE_P (ctx));
10587 if (TREE_CODE (decl) == USING_DECL)
10588 return MK_field;
10589
10590 if (TREE_CODE (decl) == FIELD_DECL)
10591 {
10592 if (DECL_NAME (decl))
10593 {
10594 /* Anonymous FIELD_DECLs have a NULL name. */
10595 gcc_checking_assert (!IDENTIFIER_ANON_P (DECL_NAME (decl)));
10596 return MK_named;
10597 }
10598
10599 if (!DECL_NAME (decl)
10600 && !RECORD_OR_UNION_TYPE_P (TREE_TYPE (decl))
10601 && !DECL_BIT_FIELD_REPRESENTATIVE (decl))
10602 {
10603 /* The underlying storage unit for a bitfield. We do not
10604 need to dedup it, because it's only reachable through
10605 the bitfields it represents. And those are deduped. */
10606 // FIXME: Is that assertion correct -- do we ever fish it
10607 // out and put it in an expr?
10608 gcc_checking_assert ((TREE_CODE (TREE_TYPE (decl)) == ARRAY_TYPE
10609 ? TREE_CODE (TREE_TYPE (TREE_TYPE (decl)))
10610 : TREE_CODE (TREE_TYPE (decl)))
10611 == INTEGER_TYPE);
10612 return MK_unique;
10613 }
10614
10615 return MK_field;
10616 }
10617
10618 if (TREE_CODE (decl) == CONST_DECL)
10619 return MK_named;
10620
10621 if (TREE_CODE (decl) == VAR_DECL
10622 && DECL_VTABLE_OR_VTT_P (decl))
10623 return MK_vtable;
10624
10625 if (DECL_THUNK_P (decl))
10626 /* Thunks are unique-enough, because they're only referenced
10627 from the vtable. And that's either new (so we want the
10628 thunks), or it's a duplicate (so it will be dropped). */
10629 return MK_unique;
10630
10631 /* There should be no other cases. */
10632 gcc_unreachable ();
10633 }
10634
10635 gcc_checking_assert (TREE_CODE (decl) != FIELD_DECL
10636 && TREE_CODE (decl) != USING_DECL
10637 && TREE_CODE (decl) != CONST_DECL);
10638
10639 if (is_key_order ())
10640 {
10641 /* When doing the mergeablilty graph, there's an indirection to
10642 the actual depset. */
10643 gcc_assert (dep->is_special ());
10644 dep = dep->deps[0];
10645 }
10646
10647 gcc_checking_assert (decl == dep->get_entity ());
10648
10649 merge_kind mk = MK_named;
10650 switch (dep->get_entity_kind ())
10651 {
10652 default:
10653 gcc_unreachable ();
10654
10655 case depset::EK_PARTIAL:
10656 mk = MK_partial;
10657 break;
10658
10659 case depset::EK_DECL:
10660 {
10661 tree ctx = CP_DECL_CONTEXT (decl);
10662
10663 switch (TREE_CODE (ctx))
10664 {
10665 default:
10666 gcc_unreachable ();
10667
10668 case FUNCTION_DECL:
10669 gcc_checking_assert
10670 (DECL_IMPLICIT_TYPEDEF_P (STRIP_TEMPLATE (decl)));
10671
10672 mk = MK_local_type;
10673 break;
10674
10675 case RECORD_TYPE:
10676 case UNION_TYPE:
10677 case NAMESPACE_DECL:
10678 if (DECL_NAME (decl) == as_base_identifier)
10679 {
10680 mk = MK_as_base;
10681 break;
10682 }
10683
10684 /* A lambda may have a class as its context, even though it
10685 isn't a member in the traditional sense; see the test
10686 g++.dg/modules/lambda-6_a.C. */
10687 if (DECL_IMPLICIT_TYPEDEF_P (STRIP_TEMPLATE (decl))
10688 && LAMBDA_TYPE_P (TREE_TYPE (decl)))
10689 if (tree scope
10690 = LAMBDA_EXPR_EXTRA_SCOPE (CLASSTYPE_LAMBDA_EXPR
10691 (TREE_TYPE (decl))))
10692 {
10693 /* Lambdas attached to fields are keyed to its class. */
10694 if (TREE_CODE (scope) == FIELD_DECL)
10695 scope = TYPE_NAME (DECL_CONTEXT (scope));
10696 if (DECL_LANG_SPECIFIC (scope)
10697 && DECL_MODULE_KEYED_DECLS_P (scope))
10698 {
10699 mk = MK_keyed;
10700 break;
10701 }
10702 }
10703
10704 if (TREE_CODE (decl) == TEMPLATE_DECL
10705 && DECL_UNINSTANTIATED_TEMPLATE_FRIEND_P (decl))
10706 {
10707 mk = MK_local_friend;
10708 break;
10709 }
10710
10711 if (IDENTIFIER_ANON_P (DECL_NAME (decl)))
10712 {
10713 if (RECORD_OR_UNION_TYPE_P (ctx))
10714 mk = MK_field;
10715 else if (DECL_IMPLICIT_TYPEDEF_P (decl)
10716 && UNSCOPED_ENUM_P (TREE_TYPE (decl))
10717 && TYPE_VALUES (TREE_TYPE (decl)))
10718 /* Keyed by first enum value, and underlying type. */
10719 mk = MK_enum;
10720 else
10721 /* No way to merge it, it is an ODR land-mine. */
10722 mk = MK_unique;
10723 }
10724 }
10725 }
10726 break;
10727
10728 case depset::EK_SPECIALIZATION:
10729 {
10730 gcc_checking_assert (dep->is_special ());
10731
10732 if (TREE_CODE (DECL_CONTEXT (decl)) == FUNCTION_DECL)
10733 /* An block-scope classes of templates are themselves
10734 templates. */
10735 gcc_checking_assert (DECL_IMPLICIT_TYPEDEF_P (decl));
10736
10737 if (dep->is_friend_spec ())
10738 mk = MK_friend_spec;
10739 else if (dep->is_type_spec ())
10740 mk = MK_type_spec;
10741 else
10742 mk = MK_decl_spec;
10743
10744 if (TREE_CODE (decl) == TEMPLATE_DECL)
10745 {
10746 spec_entry *entry = reinterpret_cast <spec_entry *> (dep->deps[0]);
10747 if (TREE_CODE (entry->spec) != TEMPLATE_DECL)
10748 mk = merge_kind (mk | MK_tmpl_tmpl_mask);
10749 }
10750 }
10751 break;
10752 }
10753
10754 return mk;
10755 }
10756
10757
10758 /* The container of DECL -- not necessarily its context! */
10759
10760 tree
10761 trees_out::decl_container (tree decl)
10762 {
10763 int use_tpl;
10764 tree tpl = NULL_TREE;
10765 if (tree template_info = node_template_info (decl, use_tpl))
10766 tpl = TI_TEMPLATE (template_info);
10767 if (tpl == decl)
10768 tpl = nullptr;
10769
10770 /* Stream the template we're instantiated from. */
10771 tree_node (tpl);
10772
10773 tree container = NULL_TREE;
10774 if (TREE_CODE (decl) == TEMPLATE_DECL
10775 && DECL_UNINSTANTIATED_TEMPLATE_FRIEND_P (decl))
10776 container = DECL_CHAIN (decl);
10777 else
10778 container = CP_DECL_CONTEXT (decl);
10779
10780 if (TYPE_P (container))
10781 container = TYPE_NAME (container);
10782
10783 tree_node (container);
10784
10785 return container;
10786 }
10787
10788 tree
10789 trees_in::decl_container ()
10790 {
10791 /* The maybe-template. */
10792 (void)tree_node ();
10793
10794 tree container = tree_node ();
10795
10796 return container;
10797 }
10798
10799 /* Write out key information about a mergeable DEP. Does not write
10800 the contents of DEP itself. The context has already been
10801 written. The container has already been streamed. */
10802
10803 void
10804 trees_out::key_mergeable (int tag, merge_kind mk, tree decl, tree inner,
10805 tree container, depset *dep)
10806 {
10807 if (dep && is_key_order ())
10808 {
10809 gcc_checking_assert (dep->is_special ());
10810 dep = dep->deps[0];
10811 }
10812
10813 if (streaming_p ())
10814 dump (dumper::MERGE)
10815 && dump ("Writing:%d's %s merge key (%s) %C:%N", tag, merge_kind_name[mk],
10816 dep ? dep->entity_kind_name () : "contained",
10817 TREE_CODE (decl), decl);
10818
10819 /* Now write the locating information. */
10820 if (mk & MK_template_mask)
10821 {
10822 /* Specializations are located via their originating template,
10823 and the set of template args they specialize. */
10824 gcc_checking_assert (dep && dep->is_special ());
10825 spec_entry *entry = reinterpret_cast <spec_entry *> (dep->deps[0]);
10826
10827 tree_node (entry->tmpl);
10828 tree_node (entry->args);
10829 if (mk & MK_tmpl_decl_mask)
10830 if (flag_concepts && TREE_CODE (inner) == VAR_DECL)
10831 {
10832 /* Variable template partial specializations might need
10833 constraints (see spec_hasher::equal). It's simpler to
10834 write NULL when we don't need them. */
10835 tree constraints = NULL_TREE;
10836
10837 if (uses_template_parms (entry->args))
10838 constraints = get_constraints (inner);
10839 tree_node (constraints);
10840 }
10841
10842 if (CHECKING_P)
10843 {
10844 /* Make sure we can locate the decl. */
10845 tree existing = match_mergeable_specialization
10846 (bool (mk & MK_tmpl_decl_mask), entry);
10847
10848 gcc_assert (existing);
10849 if (mk & MK_tmpl_decl_mask)
10850 {
10851 if (mk & MK_tmpl_tmpl_mask)
10852 existing = DECL_TI_TEMPLATE (existing);
10853 }
10854 else
10855 {
10856 if (mk & MK_tmpl_tmpl_mask)
10857 existing = CLASSTYPE_TI_TEMPLATE (existing);
10858 else
10859 existing = TYPE_NAME (existing);
10860 }
10861
10862 /* The walkabout should have found ourselves. */
10863 gcc_checking_assert (TREE_CODE (decl) == TYPE_DECL
10864 ? same_type_p (TREE_TYPE (decl),
10865 TREE_TYPE (existing))
10866 : existing == decl);
10867 }
10868 }
10869 else if (mk != MK_unique)
10870 {
10871 merge_key key;
10872 tree name = DECL_NAME (decl);
10873
10874 switch (mk)
10875 {
10876 default:
10877 gcc_unreachable ();
10878
10879 case MK_named:
10880 case MK_friend_spec:
10881 if (IDENTIFIER_CONV_OP_P (name))
10882 name = conv_op_identifier;
10883
10884 if (TREE_CODE (inner) == FUNCTION_DECL)
10885 {
10886 /* Functions are distinguished by parameter types. */
10887 tree fn_type = TREE_TYPE (inner);
10888
10889 key.ref_q = type_memfn_rqual (fn_type);
10890 key.args = TYPE_ARG_TYPES (fn_type);
10891
10892 if (tree reqs = get_constraints (inner))
10893 {
10894 if (cxx_dialect < cxx20)
10895 reqs = CI_ASSOCIATED_CONSTRAINTS (reqs);
10896 else
10897 reqs = CI_DECLARATOR_REQS (reqs);
10898 key.constraints = reqs;
10899 }
10900
10901 if (IDENTIFIER_CONV_OP_P (name)
10902 || (decl != inner
10903 && !(name == fun_identifier
10904 /* In case the user names something _FUN */
10905 && LAMBDA_TYPE_P (DECL_CONTEXT (inner)))))
10906 /* And a function template, or conversion operator needs
10907 the return type. Except for the _FUN thunk of a
10908 generic lambda, which has a recursive decl_type'd
10909 return type. */
10910 // FIXME: What if the return type is a voldemort?
10911 key.ret = fndecl_declared_return_type (inner);
10912 }
10913 break;
10914
10915 case MK_field:
10916 {
10917 unsigned ix = 0;
10918 if (TREE_CODE (inner) != FIELD_DECL)
10919 name = NULL_TREE;
10920 else
10921 gcc_checking_assert (!name || !IDENTIFIER_ANON_P (name));
10922
10923 for (tree field = TYPE_FIELDS (TREE_TYPE (container));
10924 ; field = DECL_CHAIN (field))
10925 {
10926 tree finner = STRIP_TEMPLATE (field);
10927 if (TREE_CODE (finner) == TREE_CODE (inner))
10928 {
10929 if (finner == inner)
10930 break;
10931 ix++;
10932 }
10933 }
10934 key.index = ix;
10935 }
10936 break;
10937
10938 case MK_vtable:
10939 {
10940 tree vtable = CLASSTYPE_VTABLES (TREE_TYPE (container));
10941 for (unsigned ix = 0; ; vtable = DECL_CHAIN (vtable), ix++)
10942 if (vtable == decl)
10943 {
10944 key.index = ix;
10945 break;
10946 }
10947 name = NULL_TREE;
10948 }
10949 break;
10950
10951 case MK_as_base:
10952 gcc_checking_assert
10953 (decl == TYPE_NAME (CLASSTYPE_AS_BASE (TREE_TYPE (container))));
10954 break;
10955
10956 case MK_local_friend:
10957 {
10958 /* Find by index on the class's DECL_LIST */
10959 unsigned ix = 0;
10960 for (tree decls = CLASSTYPE_DECL_LIST (TREE_CHAIN (decl));
10961 decls; decls = TREE_CHAIN (decls))
10962 if (!TREE_PURPOSE (decls))
10963 {
10964 tree frnd = friend_from_decl_list (TREE_VALUE (decls));
10965 if (frnd == decl)
10966 break;
10967 ix++;
10968 }
10969 key.index = ix;
10970 name = NULL_TREE;
10971 }
10972 break;
10973
10974 case MK_local_type:
10975 key_local_type (key, STRIP_TEMPLATE (decl), container);
10976 break;
10977
10978 case MK_enum:
10979 {
10980 /* Anonymous enums are located by their first identifier,
10981 and underlying type. */
10982 tree type = TREE_TYPE (decl);
10983
10984 gcc_checking_assert (UNSCOPED_ENUM_P (type));
10985 /* Using the type name drops the bit precision we might
10986 have been using on the enum. */
10987 key.ret = TYPE_NAME (ENUM_UNDERLYING_TYPE (type));
10988 if (tree values = TYPE_VALUES (type))
10989 name = DECL_NAME (TREE_VALUE (values));
10990 }
10991 break;
10992
10993 case MK_keyed:
10994 {
10995 gcc_checking_assert (LAMBDA_TYPE_P (TREE_TYPE (inner)));
10996 tree scope = LAMBDA_EXPR_EXTRA_SCOPE (CLASSTYPE_LAMBDA_EXPR
10997 (TREE_TYPE (inner)));
10998 gcc_checking_assert (TREE_CODE (scope) == VAR_DECL
10999 || TREE_CODE (scope) == FIELD_DECL
11000 || TREE_CODE (scope) == PARM_DECL
11001 || TREE_CODE (scope) == TYPE_DECL);
11002 /* Lambdas attached to fields are keyed to the class. */
11003 if (TREE_CODE (scope) == FIELD_DECL)
11004 scope = TYPE_NAME (DECL_CONTEXT (scope));
11005 auto *root = keyed_table->get (scope);
11006 unsigned ix = root->length ();
11007 /* If we don't find it, we'll write a really big number
11008 that the reader will ignore. */
11009 while (ix--)
11010 if ((*root)[ix] == inner)
11011 break;
11012
11013 /* Use the keyed-to decl as the 'name'. */
11014 name = scope;
11015 key.index = ix;
11016 }
11017 break;
11018
11019 case MK_partial:
11020 {
11021 tree ti = get_template_info (inner);
11022 key.constraints = get_constraints (inner);
11023 key.ret = TI_TEMPLATE (ti);
11024 key.args = TI_ARGS (ti);
11025 }
11026 break;
11027 }
11028
11029 tree_node (name);
11030 if (streaming_p ())
11031 {
11032 unsigned code = (key.ref_q << 0) | (key.index << 2);
11033 u (code);
11034 }
11035
11036 if (mk == MK_enum)
11037 tree_node (key.ret);
11038 else if (mk == MK_partial
11039 || (mk == MK_named && inner
11040 && TREE_CODE (inner) == FUNCTION_DECL))
11041 {
11042 tree_node (key.ret);
11043 tree arg = key.args;
11044 if (mk == MK_named)
11045 while (arg && arg != void_list_node)
11046 {
11047 tree_node (TREE_VALUE (arg));
11048 arg = TREE_CHAIN (arg);
11049 }
11050 tree_node (arg);
11051 tree_node (key.constraints);
11052 }
11053 }
11054 }
11055
11056 /* DECL is a new declaration that may be duplicated in OVL. Use RET &
11057 ARGS to find its clone, or NULL. If DECL's DECL_NAME is NULL, this
11058 has been found by a proxy. It will be an enum type located by its
11059 first member.
11060
11061 We're conservative with matches, so ambiguous decls will be
11062 registered as different, then lead to a lookup error if the two
11063 modules are both visible. Perhaps we want to do something similar
11064 to duplicate decls to get ODR errors on loading? We already have
11065 some special casing for namespaces. */
11066
11067 static tree
11068 check_mergeable_decl (merge_kind mk, tree decl, tree ovl, merge_key const &key)
11069 {
11070 tree found = NULL_TREE;
11071 for (ovl_iterator iter (ovl); !found && iter; ++iter)
11072 {
11073 tree match = *iter;
11074
11075 tree d_inner = decl;
11076 tree m_inner = match;
11077
11078 again:
11079 if (TREE_CODE (d_inner) != TREE_CODE (m_inner))
11080 {
11081 if (TREE_CODE (match) == NAMESPACE_DECL
11082 && !DECL_NAMESPACE_ALIAS (match))
11083 /* Namespaces are never overloaded. */
11084 found = match;
11085
11086 continue;
11087 }
11088
11089 switch (TREE_CODE (d_inner))
11090 {
11091 case TEMPLATE_DECL:
11092 if (template_heads_equivalent_p (d_inner, m_inner))
11093 {
11094 d_inner = DECL_TEMPLATE_RESULT (d_inner);
11095 m_inner = DECL_TEMPLATE_RESULT (m_inner);
11096 if (d_inner == error_mark_node
11097 && TYPE_DECL_ALIAS_P (m_inner))
11098 {
11099 found = match;
11100 break;
11101 }
11102 goto again;
11103 }
11104 break;
11105
11106 case FUNCTION_DECL:
11107 if (tree m_type = TREE_TYPE (m_inner))
11108 if ((!key.ret
11109 || same_type_p (key.ret, fndecl_declared_return_type (m_inner)))
11110 && type_memfn_rqual (m_type) == key.ref_q
11111 && compparms (key.args, TYPE_ARG_TYPES (m_type))
11112 /* Reject if old is a "C" builtin and new is not "C".
11113 Matches decls_match behaviour. */
11114 && (!DECL_IS_UNDECLARED_BUILTIN (m_inner)
11115 || !DECL_EXTERN_C_P (m_inner)
11116 || DECL_EXTERN_C_P (d_inner))
11117 /* Reject if one is a different member of a
11118 guarded/pre/post fn set. */
11119 && (!flag_contracts
11120 || (DECL_IS_PRE_FN_P (d_inner)
11121 == DECL_IS_PRE_FN_P (m_inner)))
11122 && (!flag_contracts
11123 || (DECL_IS_POST_FN_P (d_inner)
11124 == DECL_IS_POST_FN_P (m_inner))))
11125 {
11126 tree m_reqs = get_constraints (m_inner);
11127 if (m_reqs)
11128 {
11129 if (cxx_dialect < cxx20)
11130 m_reqs = CI_ASSOCIATED_CONSTRAINTS (m_reqs);
11131 else
11132 m_reqs = CI_DECLARATOR_REQS (m_reqs);
11133 }
11134
11135 if (cp_tree_equal (key.constraints, m_reqs))
11136 found = match;
11137 }
11138 break;
11139
11140 case TYPE_DECL:
11141 if (DECL_IMPLICIT_TYPEDEF_P (d_inner)
11142 == DECL_IMPLICIT_TYPEDEF_P (m_inner))
11143 {
11144 if (!IDENTIFIER_ANON_P (DECL_NAME (m_inner)))
11145 return match;
11146 else if (mk == MK_enum
11147 && (TYPE_NAME (ENUM_UNDERLYING_TYPE (TREE_TYPE (m_inner)))
11148 == key.ret))
11149 found = match;
11150 }
11151 break;
11152
11153 default:
11154 found = match;
11155 break;
11156 }
11157 }
11158
11159 return found;
11160 }
11161
11162 /* DECL, INNER & TYPE are a skeleton set of nodes for a decl. Only
11163 the bools have been filled in. Read its merging key and merge it.
11164 Returns the existing decl if there is one. */
11165
11166 tree
11167 trees_in::key_mergeable (int tag, merge_kind mk, tree decl, tree inner,
11168 tree type, tree container, bool is_attached)
11169 {
11170 const char *kind = "new";
11171 tree existing = NULL_TREE;
11172
11173 if (mk & MK_template_mask)
11174 {
11175 // FIXME: We could stream the specialization hash?
11176 spec_entry spec;
11177 spec.tmpl = tree_node ();
11178 spec.args = tree_node ();
11179
11180 if (get_overrun ())
11181 return error_mark_node;
11182
11183 DECL_NAME (decl) = DECL_NAME (spec.tmpl);
11184 DECL_CONTEXT (decl) = DECL_CONTEXT (spec.tmpl);
11185 DECL_NAME (inner) = DECL_NAME (decl);
11186 DECL_CONTEXT (inner) = DECL_CONTEXT (decl);
11187
11188 tree constr = NULL_TREE;
11189 bool is_decl = mk & MK_tmpl_decl_mask;
11190 if (is_decl)
11191 {
11192 if (flag_concepts && TREE_CODE (inner) == VAR_DECL)
11193 {
11194 constr = tree_node ();
11195 if (constr)
11196 set_constraints (inner, constr);
11197 }
11198 spec.spec = (mk & MK_tmpl_tmpl_mask) ? inner : decl;
11199 }
11200 else
11201 spec.spec = type;
11202 existing = match_mergeable_specialization (is_decl, &spec);
11203 if (constr)
11204 /* We'll add these back later, if this is the new decl. */
11205 remove_constraints (inner);
11206
11207 if (!existing)
11208 ; /* We'll add to the table once read. */
11209 else if (mk & MK_tmpl_decl_mask)
11210 {
11211 /* A declaration specialization. */
11212 if (mk & MK_tmpl_tmpl_mask)
11213 existing = DECL_TI_TEMPLATE (existing);
11214 }
11215 else
11216 {
11217 /* A type specialization. */
11218 if (mk & MK_tmpl_tmpl_mask)
11219 existing = CLASSTYPE_TI_TEMPLATE (existing);
11220 else
11221 existing = TYPE_NAME (existing);
11222 }
11223 }
11224 else if (mk == MK_unique)
11225 kind = "unique";
11226 else
11227 {
11228 tree name = tree_node ();
11229
11230 merge_key key;
11231 unsigned code = u ();
11232 key.ref_q = cp_ref_qualifier ((code >> 0) & 3);
11233 key.index = code >> 2;
11234
11235 if (mk == MK_enum)
11236 key.ret = tree_node ();
11237 else if (mk == MK_partial
11238 || ((mk == MK_named || mk == MK_friend_spec)
11239 && TREE_CODE (inner) == FUNCTION_DECL))
11240 {
11241 key.ret = tree_node ();
11242 tree arg, *arg_ptr = &key.args;
11243 while ((arg = tree_node ())
11244 && arg != void_list_node
11245 && mk != MK_partial)
11246 {
11247 *arg_ptr = tree_cons (NULL_TREE, arg, NULL_TREE);
11248 arg_ptr = &TREE_CHAIN (*arg_ptr);
11249 }
11250 *arg_ptr = arg;
11251 key.constraints = tree_node ();
11252 }
11253
11254 if (get_overrun ())
11255 return error_mark_node;
11256
11257 if (mk < MK_indirect_lwm)
11258 {
11259 DECL_NAME (decl) = name;
11260 DECL_CONTEXT (decl) = FROB_CONTEXT (container);
11261 }
11262 DECL_NAME (inner) = DECL_NAME (decl);
11263 DECL_CONTEXT (inner) = DECL_CONTEXT (decl);
11264
11265 if (mk == MK_partial)
11266 {
11267 for (tree spec = DECL_TEMPLATE_SPECIALIZATIONS (key.ret);
11268 spec; spec = TREE_CHAIN (spec))
11269 {
11270 tree tmpl = TREE_VALUE (spec);
11271 tree ti = get_template_info (tmpl);
11272 if (template_args_equal (key.args, TI_ARGS (ti))
11273 && cp_tree_equal (key.constraints,
11274 get_constraints
11275 (DECL_TEMPLATE_RESULT (tmpl))))
11276 {
11277 existing = tmpl;
11278 break;
11279 }
11280 }
11281 }
11282 else if (mk == MK_keyed
11283 && DECL_LANG_SPECIFIC (name)
11284 && DECL_MODULE_KEYED_DECLS_P (name))
11285 {
11286 gcc_checking_assert (TREE_CODE (container) == NAMESPACE_DECL
11287 || TREE_CODE (container) == TYPE_DECL);
11288 if (auto *set = keyed_table->get (name))
11289 if (key.index < set->length ())
11290 {
11291 existing = (*set)[key.index];
11292 if (existing)
11293 {
11294 gcc_checking_assert
11295 (DECL_IMPLICIT_TYPEDEF_P (existing));
11296 if (inner != decl)
11297 existing
11298 = CLASSTYPE_TI_TEMPLATE (TREE_TYPE (existing));
11299 }
11300 }
11301 }
11302 else
11303 switch (TREE_CODE (container))
11304 {
11305 default:
11306 gcc_unreachable ();
11307
11308 case NAMESPACE_DECL:
11309 if (is_attached
11310 && !(state->is_module () || state->is_partition ()))
11311 kind = "unique";
11312 else
11313 {
11314 gcc_checking_assert (mk == MK_named || mk == MK_enum);
11315 tree mvec;
11316 tree *vslot = mergeable_namespace_slots (container, name,
11317 is_attached, &mvec);
11318 existing = check_mergeable_decl (mk, decl, *vslot, key);
11319 if (!existing)
11320 add_mergeable_namespace_entity (vslot, decl);
11321 else
11322 {
11323 /* Note that we now have duplicates to deal with in
11324 name lookup. */
11325 if (is_attached)
11326 BINDING_VECTOR_PARTITION_DUPS_P (mvec) = true;
11327 else
11328 BINDING_VECTOR_GLOBAL_DUPS_P (mvec) = true;
11329 }
11330 }
11331 break;
11332
11333 case FUNCTION_DECL:
11334 gcc_checking_assert (mk == MK_local_type);
11335 existing = key_local_type (key, container, name);
11336 if (existing && inner != decl)
11337 existing = TYPE_TI_TEMPLATE (TREE_TYPE (existing));
11338 break;
11339
11340 case TYPE_DECL:
11341 if (is_attached && !(state->is_module () || state->is_partition ())
11342 /* Implicit member functions can come from
11343 anywhere. */
11344 && !(DECL_ARTIFICIAL (decl)
11345 && TREE_CODE (decl) == FUNCTION_DECL
11346 && !DECL_THUNK_P (decl)))
11347 kind = "unique";
11348 else
11349 {
11350 tree ctx = TREE_TYPE (container);
11351
11352 /* For some reason templated enumeral types are not marked
11353 as COMPLETE_TYPE_P, even though they have members.
11354 This may well be a bug elsewhere. */
11355 if (TREE_CODE (ctx) == ENUMERAL_TYPE)
11356 existing = find_enum_member (ctx, name);
11357 else if (COMPLETE_TYPE_P (ctx))
11358 {
11359 switch (mk)
11360 {
11361 default:
11362 gcc_unreachable ();
11363
11364 case MK_named:
11365 existing = lookup_class_binding (ctx, name);
11366 if (existing)
11367 {
11368 tree inner = decl;
11369 if (TREE_CODE (inner) == TEMPLATE_DECL
11370 && !DECL_MEMBER_TEMPLATE_P (inner))
11371 inner = DECL_TEMPLATE_RESULT (inner);
11372
11373 existing = check_mergeable_decl
11374 (mk, inner, existing, key);
11375
11376 if (!existing && DECL_ALIAS_TEMPLATE_P (decl))
11377 {} // FIXME: Insert into specialization
11378 // tables, we'll need the arguments for that!
11379 }
11380 break;
11381
11382 case MK_field:
11383 {
11384 unsigned ix = key.index;
11385 for (tree field = TYPE_FIELDS (ctx);
11386 field; field = DECL_CHAIN (field))
11387 {
11388 tree finner = STRIP_TEMPLATE (field);
11389 if (TREE_CODE (finner) == TREE_CODE (inner))
11390 if (!ix--)
11391 {
11392 existing = field;
11393 break;
11394 }
11395 }
11396 }
11397 break;
11398
11399 case MK_vtable:
11400 {
11401 unsigned ix = key.index;
11402 for (tree vtable = CLASSTYPE_VTABLES (ctx);
11403 vtable; vtable = DECL_CHAIN (vtable))
11404 if (!ix--)
11405 {
11406 existing = vtable;
11407 break;
11408 }
11409 }
11410 break;
11411
11412 case MK_as_base:
11413 {
11414 tree as_base = CLASSTYPE_AS_BASE (ctx);
11415 if (as_base && as_base != ctx)
11416 existing = TYPE_NAME (as_base);
11417 }
11418 break;
11419
11420 case MK_local_friend:
11421 {
11422 unsigned ix = key.index;
11423 for (tree decls = CLASSTYPE_DECL_LIST (ctx);
11424 decls; decls = TREE_CHAIN (decls))
11425 if (!TREE_PURPOSE (decls) && !ix--)
11426 {
11427 existing
11428 = friend_from_decl_list (TREE_VALUE (decls));
11429 break;
11430 }
11431 }
11432 break;
11433 }
11434
11435 if (existing && mk < MK_indirect_lwm && mk != MK_partial
11436 && TREE_CODE (decl) == TEMPLATE_DECL
11437 && !DECL_MEMBER_TEMPLATE_P (decl))
11438 {
11439 tree ti;
11440 if (DECL_IMPLICIT_TYPEDEF_P (existing))
11441 ti = TYPE_TEMPLATE_INFO (TREE_TYPE (existing));
11442 else
11443 ti = DECL_TEMPLATE_INFO (existing);
11444 existing = TI_TEMPLATE (ti);
11445 }
11446 }
11447 }
11448 }
11449 }
11450
11451 dump (dumper::MERGE)
11452 && dump ("Read:%d's %s merge key (%s) %C:%N", tag, merge_kind_name[mk],
11453 existing ? "matched" : kind, TREE_CODE (decl), decl);
11454
11455 return existing;
11456 }
11457
11458 void
11459 trees_out::binfo_mergeable (tree binfo)
11460 {
11461 tree dom = binfo;
11462 while (tree parent = BINFO_INHERITANCE_CHAIN (dom))
11463 dom = parent;
11464 tree type = BINFO_TYPE (dom);
11465 gcc_checking_assert (TYPE_BINFO (type) == dom);
11466 tree_node (type);
11467 if (streaming_p ())
11468 {
11469 unsigned ix = 0;
11470 for (; dom != binfo; dom = TREE_CHAIN (dom))
11471 ix++;
11472 u (ix);
11473 }
11474 }
11475
11476 unsigned
11477 trees_in::binfo_mergeable (tree *type)
11478 {
11479 *type = tree_node ();
11480 return u ();
11481 }
11482
11483 /* DECL is a just streamed mergeable decl that should match EXISTING. Check
11484 it does and issue an appropriate diagnostic if not. Merge any
11485 bits from DECL to EXISTING. This is stricter matching than
11486 decls_match, because we can rely on ODR-sameness, and we cannot use
11487 decls_match because it can cause instantiations of constraints. */
11488
11489 bool
11490 trees_in::is_matching_decl (tree existing, tree decl, bool is_typedef)
11491 {
11492 // FIXME: We should probably do some duplicate decl-like stuff here
11493 // (beware, default parms should be the same?) Can we just call
11494 // duplicate_decls and teach it how to handle the module-specific
11495 // permitted/required duplications?
11496
11497 // We know at this point that the decls have matched by key, so we
11498 // can elide some of the checking
11499 gcc_checking_assert (TREE_CODE (existing) == TREE_CODE (decl));
11500
11501 tree d_inner = decl;
11502 tree e_inner = existing;
11503 if (TREE_CODE (decl) == TEMPLATE_DECL)
11504 {
11505 d_inner = DECL_TEMPLATE_RESULT (d_inner);
11506 e_inner = DECL_TEMPLATE_RESULT (e_inner);
11507 gcc_checking_assert (TREE_CODE (e_inner) == TREE_CODE (d_inner));
11508 }
11509
11510 if (TREE_CODE (d_inner) == FUNCTION_DECL)
11511 {
11512 tree e_ret = fndecl_declared_return_type (existing);
11513 tree d_ret = fndecl_declared_return_type (decl);
11514
11515 if (decl != d_inner && DECL_NAME (d_inner) == fun_identifier
11516 && LAMBDA_TYPE_P (DECL_CONTEXT (d_inner)))
11517 /* This has a recursive type that will compare different. */;
11518 else if (!same_type_p (d_ret, e_ret))
11519 goto mismatch;
11520
11521 tree e_type = TREE_TYPE (e_inner);
11522 tree d_type = TREE_TYPE (d_inner);
11523
11524 if (DECL_EXTERN_C_P (d_inner) != DECL_EXTERN_C_P (e_inner))
11525 goto mismatch;
11526
11527 for (tree e_args = TYPE_ARG_TYPES (e_type),
11528 d_args = TYPE_ARG_TYPES (d_type);
11529 e_args != d_args && (e_args || d_args);
11530 e_args = TREE_CHAIN (e_args), d_args = TREE_CHAIN (d_args))
11531 {
11532 if (!(e_args && d_args))
11533 goto mismatch;
11534
11535 if (!same_type_p (TREE_VALUE (d_args), TREE_VALUE (e_args)))
11536 goto mismatch;
11537
11538 // FIXME: Check default values
11539 }
11540
11541 /* If EXISTING has an undeduced or uninstantiated exception
11542 specification, but DECL does not, propagate the exception
11543 specification. Otherwise we end up asserting or trying to
11544 instantiate it in the middle of loading. */
11545 tree e_spec = TYPE_RAISES_EXCEPTIONS (e_type);
11546 tree d_spec = TYPE_RAISES_EXCEPTIONS (d_type);
11547 if (DEFERRED_NOEXCEPT_SPEC_P (e_spec))
11548 {
11549 if (!DEFERRED_NOEXCEPT_SPEC_P (d_spec)
11550 || (UNEVALUATED_NOEXCEPT_SPEC_P (e_spec)
11551 && !UNEVALUATED_NOEXCEPT_SPEC_P (d_spec)))
11552 {
11553 dump (dumper::MERGE)
11554 && dump ("Propagating instantiated noexcept to %N", existing);
11555 TREE_TYPE (existing) = d_type;
11556
11557 /* Propagate to existing clones. */
11558 tree clone;
11559 FOR_EACH_CLONE (clone, existing)
11560 {
11561 if (TREE_TYPE (clone) == e_type)
11562 TREE_TYPE (clone) = d_type;
11563 else
11564 TREE_TYPE (clone)
11565 = build_exception_variant (TREE_TYPE (clone), d_spec);
11566 }
11567 }
11568 }
11569 else if (!DEFERRED_NOEXCEPT_SPEC_P (d_spec)
11570 && !comp_except_specs (d_spec, e_spec, ce_type))
11571 goto mismatch;
11572
11573 /* Similarly if EXISTING has an undeduced return type, but DECL's
11574 is already deduced. */
11575 if (undeduced_auto_decl (existing) && !undeduced_auto_decl (decl))
11576 {
11577 dump (dumper::MERGE)
11578 && dump ("Propagating deduced return type to %N", existing);
11579 TREE_TYPE (existing) = change_return_type (TREE_TYPE (d_type), e_type);
11580 }
11581 }
11582 else if (is_typedef)
11583 {
11584 if (!DECL_ORIGINAL_TYPE (e_inner)
11585 || !same_type_p (DECL_ORIGINAL_TYPE (d_inner),
11586 DECL_ORIGINAL_TYPE (e_inner)))
11587 goto mismatch;
11588 }
11589 /* Using cp_tree_equal because we can meet TYPE_ARGUMENT_PACKs
11590 here. I suspect the entities that directly do that are things
11591 that shouldn't go to duplicate_decls (FIELD_DECLs etc). */
11592 else if (!cp_tree_equal (TREE_TYPE (decl), TREE_TYPE (existing)))
11593 {
11594 mismatch:
11595 if (DECL_IS_UNDECLARED_BUILTIN (existing))
11596 /* Just like duplicate_decls, presum the user knows what
11597 they're doing in overriding a builtin. */
11598 TREE_TYPE (existing) = TREE_TYPE (decl);
11599 else if (decl_function_context (decl))
11600 /* The type of a mergeable local entity (such as a function scope
11601 capturing lambda's closure type fields) can depend on an
11602 unmergeable local entity (such as a local variable), so type
11603 equality isn't feasible in general for local entities. */;
11604 else
11605 {
11606 // FIXME:QOI Might be template specialization from a module,
11607 // not necessarily global module
11608 error_at (DECL_SOURCE_LOCATION (decl),
11609 "conflicting global module declaration %#qD", decl);
11610 inform (DECL_SOURCE_LOCATION (existing),
11611 "existing declaration %#qD", existing);
11612 return false;
11613 }
11614 }
11615
11616 if (DECL_IS_UNDECLARED_BUILTIN (existing)
11617 && !DECL_IS_UNDECLARED_BUILTIN (decl))
11618 {
11619 /* We're matching a builtin that the user has yet to declare.
11620 We are the one! This is very much duplicate-decl
11621 shenanigans. */
11622 DECL_SOURCE_LOCATION (existing) = DECL_SOURCE_LOCATION (decl);
11623 if (TREE_CODE (decl) != TYPE_DECL)
11624 {
11625 /* Propagate exceptions etc. */
11626 TREE_TYPE (existing) = TREE_TYPE (decl);
11627 TREE_NOTHROW (existing) = TREE_NOTHROW (decl);
11628 }
11629 /* This is actually an import! */
11630 DECL_MODULE_IMPORT_P (existing) = true;
11631
11632 /* Yay, sliced! */
11633 existing->base = decl->base;
11634
11635 if (TREE_CODE (decl) == FUNCTION_DECL)
11636 {
11637 /* Ew :( */
11638 memcpy (&existing->decl_common.size,
11639 &decl->decl_common.size,
11640 (offsetof (tree_decl_common, pt_uid)
11641 - offsetof (tree_decl_common, size)));
11642 auto bltin_class = DECL_BUILT_IN_CLASS (decl);
11643 existing->function_decl.built_in_class = bltin_class;
11644 auto fncode = DECL_UNCHECKED_FUNCTION_CODE (decl);
11645 DECL_UNCHECKED_FUNCTION_CODE (existing) = fncode;
11646 if (existing->function_decl.built_in_class == BUILT_IN_NORMAL)
11647 {
11648 if (builtin_decl_explicit_p (built_in_function (fncode)))
11649 switch (fncode)
11650 {
11651 case BUILT_IN_STPCPY:
11652 set_builtin_decl_implicit_p
11653 (built_in_function (fncode), true);
11654 break;
11655 default:
11656 set_builtin_decl_declared_p
11657 (built_in_function (fncode), true);
11658 break;
11659 }
11660 copy_attributes_to_builtin (decl);
11661 }
11662 }
11663 }
11664
11665 if (VAR_OR_FUNCTION_DECL_P (decl)
11666 && DECL_TEMPLATE_INSTANTIATED (decl))
11667 /* Don't instantiate again! */
11668 DECL_TEMPLATE_INSTANTIATED (existing) = true;
11669
11670 if (TREE_CODE (d_inner) == FUNCTION_DECL
11671 && DECL_DECLARED_INLINE_P (d_inner))
11672 DECL_DECLARED_INLINE_P (e_inner) = true;
11673 if (!DECL_EXTERNAL (d_inner))
11674 DECL_EXTERNAL (e_inner) = false;
11675
11676 // FIXME: Check default tmpl and fn parms here
11677
11678 return true;
11679 }
11680
11681 /* FN is an implicit member function that we've discovered is new to
11682 the class. Add it to the TYPE_FIELDS chain and the method vector.
11683 Reset the appropriate classtype lazy flag. */
11684
11685 bool
11686 trees_in::install_implicit_member (tree fn)
11687 {
11688 tree ctx = DECL_CONTEXT (fn);
11689 tree name = DECL_NAME (fn);
11690 /* We know these are synthesized, so the set of expected prototypes
11691 is quite restricted. We're not validating correctness, just
11692 distinguishing beteeen the small set of possibilities. */
11693 tree parm_type = TREE_VALUE (FUNCTION_FIRST_USER_PARMTYPE (fn));
11694 if (IDENTIFIER_CTOR_P (name))
11695 {
11696 if (CLASSTYPE_LAZY_DEFAULT_CTOR (ctx)
11697 && VOID_TYPE_P (parm_type))
11698 CLASSTYPE_LAZY_DEFAULT_CTOR (ctx) = false;
11699 else if (!TYPE_REF_P (parm_type))
11700 return false;
11701 else if (CLASSTYPE_LAZY_COPY_CTOR (ctx)
11702 && !TYPE_REF_IS_RVALUE (parm_type))
11703 CLASSTYPE_LAZY_COPY_CTOR (ctx) = false;
11704 else if (CLASSTYPE_LAZY_MOVE_CTOR (ctx))
11705 CLASSTYPE_LAZY_MOVE_CTOR (ctx) = false;
11706 else
11707 return false;
11708 }
11709 else if (IDENTIFIER_DTOR_P (name))
11710 {
11711 if (CLASSTYPE_LAZY_DESTRUCTOR (ctx))
11712 CLASSTYPE_LAZY_DESTRUCTOR (ctx) = false;
11713 else
11714 return false;
11715 if (DECL_VIRTUAL_P (fn))
11716 /* A virtual dtor should have been created when the class
11717 became complete. */
11718 return false;
11719 }
11720 else if (name == assign_op_identifier)
11721 {
11722 if (!TYPE_REF_P (parm_type))
11723 return false;
11724 else if (CLASSTYPE_LAZY_COPY_ASSIGN (ctx)
11725 && !TYPE_REF_IS_RVALUE (parm_type))
11726 CLASSTYPE_LAZY_COPY_ASSIGN (ctx) = false;
11727 else if (CLASSTYPE_LAZY_MOVE_ASSIGN (ctx))
11728 CLASSTYPE_LAZY_MOVE_ASSIGN (ctx) = false;
11729 else
11730 return false;
11731 }
11732 else
11733 return false;
11734
11735 dump (dumper::MERGE) && dump ("Adding implicit member %N", fn);
11736
11737 DECL_CHAIN (fn) = TYPE_FIELDS (ctx);
11738 TYPE_FIELDS (ctx) = fn;
11739
11740 add_method (ctx, fn, false);
11741
11742 /* Propagate TYPE_FIELDS. */
11743 fixup_type_variants (ctx);
11744
11745 return true;
11746 }
11747
11748 /* Return non-zero if DECL has a definition that would be interesting to
11749 write out. */
11750
11751 static bool
11752 has_definition (tree decl)
11753 {
11754 bool is_tmpl = TREE_CODE (decl) == TEMPLATE_DECL;
11755 if (is_tmpl)
11756 decl = DECL_TEMPLATE_RESULT (decl);
11757
11758 switch (TREE_CODE (decl))
11759 {
11760 default:
11761 break;
11762
11763 case FUNCTION_DECL:
11764 if (!DECL_SAVED_TREE (decl))
11765 /* Not defined. */
11766 break;
11767
11768 if (DECL_DECLARED_INLINE_P (decl))
11769 return true;
11770
11771 if (DECL_THIS_STATIC (decl)
11772 && (header_module_p ()
11773 || (!DECL_LANG_SPECIFIC (decl) || !DECL_MODULE_PURVIEW_P (decl))))
11774 /* GM static function. */
11775 return true;
11776
11777 if (DECL_TEMPLATE_INFO (decl))
11778 {
11779 int use_tpl = DECL_USE_TEMPLATE (decl);
11780
11781 // FIXME: Partial specializations have definitions too.
11782 if (use_tpl < 2)
11783 return true;
11784 }
11785 break;
11786
11787 case TYPE_DECL:
11788 {
11789 tree type = TREE_TYPE (decl);
11790 if (type == TYPE_MAIN_VARIANT (type)
11791 && decl == TYPE_NAME (type)
11792 && (TREE_CODE (type) == ENUMERAL_TYPE
11793 ? TYPE_VALUES (type) : TYPE_FIELDS (type)))
11794 return true;
11795 }
11796 break;
11797
11798 case VAR_DECL:
11799 /* DECL_INITIALIZED_P might not be set on a dependent VAR_DECL. */
11800 if (DECL_LANG_SPECIFIC (decl)
11801 && DECL_TEMPLATE_INFO (decl)
11802 && DECL_INITIAL (decl))
11803 return true;
11804 else
11805 {
11806 if (!DECL_INITIALIZED_P (decl))
11807 return false;
11808
11809 if (header_module_p ()
11810 || (!DECL_LANG_SPECIFIC (decl) || !DECL_MODULE_PURVIEW_P (decl)))
11811 /* GM static variable. */
11812 return true;
11813
11814 if (!TREE_CONSTANT (decl))
11815 return false;
11816
11817 return true;
11818 }
11819 break;
11820
11821 case CONCEPT_DECL:
11822 if (DECL_INITIAL (decl))
11823 return true;
11824
11825 break;
11826 }
11827
11828 return false;
11829 }
11830
11831 uintptr_t *
11832 trees_in::find_duplicate (tree existing)
11833 {
11834 if (!duplicates)
11835 return NULL;
11836
11837 return duplicates->get (existing);
11838 }
11839
11840 /* We're starting to read a duplicate DECL. EXISTING is the already
11841 known node. */
11842
11843 void
11844 trees_in::register_duplicate (tree decl, tree existing)
11845 {
11846 if (!duplicates)
11847 duplicates = new duplicate_hash_map (40);
11848
11849 bool existed;
11850 uintptr_t &slot = duplicates->get_or_insert (existing, &existed);
11851 gcc_checking_assert (!existed);
11852 slot = reinterpret_cast<uintptr_t> (decl);
11853
11854 if (TREE_CODE (decl) == TEMPLATE_DECL)
11855 /* Also register the DECL_TEMPLATE_RESULT as a duplicate so
11856 that passing decl's _RESULT to maybe_duplicate naturally
11857 gives us existing's _RESULT back. */
11858 register_duplicate (DECL_TEMPLATE_RESULT (decl),
11859 DECL_TEMPLATE_RESULT (existing));
11860 }
11861
11862 /* We've read a definition of MAYBE_EXISTING. If not a duplicate,
11863 return MAYBE_EXISTING (into which the definition should be
11864 installed). Otherwise return NULL if already known bad, or the
11865 duplicate we read (for ODR checking, or extracting additional merge
11866 information). */
11867
11868 tree
11869 trees_in::odr_duplicate (tree maybe_existing, bool has_defn)
11870 {
11871 tree res = NULL_TREE;
11872
11873 if (uintptr_t *dup = find_duplicate (maybe_existing))
11874 {
11875 if (!(*dup & 1))
11876 res = reinterpret_cast<tree> (*dup);
11877 }
11878 else
11879 res = maybe_existing;
11880
11881 assert_definition (maybe_existing, res && !has_defn);
11882
11883 // FIXME: We probably need to return the template, so that the
11884 // template header can be checked?
11885 return res ? STRIP_TEMPLATE (res) : NULL_TREE;
11886 }
11887
11888 /* The following writer functions rely on the current behaviour of
11889 depset::hash::add_dependency making the decl and defn depset nodes
11890 depend on eachother. That way we don't have to worry about seeding
11891 the tree map with named decls that cannot be looked up by name (I.e
11892 template and function parms). We know the decl and definition will
11893 be in the same cluster, which is what we want. */
11894
11895 void
11896 trees_out::write_function_def (tree decl)
11897 {
11898 tree_node (DECL_RESULT (decl));
11899 tree_node (DECL_INITIAL (decl));
11900 tree_node (DECL_SAVED_TREE (decl));
11901 tree_node (DECL_FRIEND_CONTEXT (decl));
11902
11903 constexpr_fundef *cexpr = retrieve_constexpr_fundef (decl);
11904
11905 if (streaming_p ())
11906 u (cexpr != nullptr);
11907 if (cexpr)
11908 {
11909 chained_decls (cexpr->parms);
11910 tree_node (cexpr->result);
11911 tree_node (cexpr->body);
11912 }
11913
11914 function* f = DECL_STRUCT_FUNCTION (decl);
11915
11916 if (streaming_p ())
11917 {
11918 unsigned flags = 0;
11919
11920 if (f)
11921 flags |= 2;
11922 if (DECL_NOT_REALLY_EXTERN (decl))
11923 flags |= 1;
11924
11925 u (flags);
11926 }
11927
11928 if (state && f)
11929 {
11930 state->write_location (*this, f->function_start_locus);
11931 state->write_location (*this, f->function_end_locus);
11932 }
11933 }
11934
11935 void
11936 trees_out::mark_function_def (tree)
11937 {
11938 }
11939
11940 bool
11941 trees_in::read_function_def (tree decl, tree maybe_template)
11942 {
11943 dump () && dump ("Reading function definition %N", decl);
11944 tree result = tree_node ();
11945 tree initial = tree_node ();
11946 tree saved = tree_node ();
11947 tree context = tree_node ();
11948 constexpr_fundef cexpr;
11949 post_process_data pdata {};
11950 pdata.decl = maybe_template;
11951
11952 tree maybe_dup = odr_duplicate (maybe_template, DECL_SAVED_TREE (decl));
11953 bool installing = maybe_dup && !DECL_SAVED_TREE (decl);
11954
11955 if (u ())
11956 {
11957 cexpr.parms = chained_decls ();
11958 cexpr.result = tree_node ();
11959 cexpr.body = tree_node ();
11960 cexpr.decl = decl;
11961 }
11962 else
11963 cexpr.decl = NULL_TREE;
11964
11965 unsigned flags = u ();
11966
11967 if (flags & 2)
11968 {
11969 pdata.start_locus = state->read_location (*this);
11970 pdata.end_locus = state->read_location (*this);
11971 }
11972
11973 if (get_overrun ())
11974 return NULL_TREE;
11975
11976 if (installing)
11977 {
11978 DECL_NOT_REALLY_EXTERN (decl) = flags & 1;
11979 DECL_RESULT (decl) = result;
11980 DECL_INITIAL (decl) = initial;
11981 DECL_SAVED_TREE (decl) = saved;
11982
11983 if (context)
11984 SET_DECL_FRIEND_CONTEXT (decl, context);
11985 if (cexpr.decl)
11986 register_constexpr_fundef (cexpr);
11987 post_process (pdata);
11988 }
11989 else if (maybe_dup)
11990 {
11991 // FIXME:QOI Check matching defn
11992 }
11993
11994 return true;
11995 }
11996
11997 /* Also for CONCEPT_DECLs. */
11998
11999 void
12000 trees_out::write_var_def (tree decl)
12001 {
12002 tree init = DECL_INITIAL (decl);
12003 tree_node (init);
12004 if (!init)
12005 {
12006 tree dyn_init = NULL_TREE;
12007
12008 /* We only need to write initializers in header modules. */
12009 if (header_module_p () && DECL_NONTRIVIALLY_INITIALIZED_P (decl))
12010 {
12011 dyn_init = value_member (decl,
12012 CP_DECL_THREAD_LOCAL_P (decl)
12013 ? tls_aggregates : static_aggregates);
12014 gcc_checking_assert (dyn_init);
12015 /* Mark it so write_inits knows this is needed. */
12016 TREE_LANG_FLAG_0 (dyn_init) = true;
12017 dyn_init = TREE_PURPOSE (dyn_init);
12018 }
12019 tree_node (dyn_init);
12020 }
12021 }
12022
12023 void
12024 trees_out::mark_var_def (tree)
12025 {
12026 }
12027
12028 bool
12029 trees_in::read_var_def (tree decl, tree maybe_template)
12030 {
12031 /* Do not mark the virtual table entries as used. */
12032 bool vtable = VAR_P (decl) && DECL_VTABLE_OR_VTT_P (decl);
12033 unused += vtable;
12034 tree init = tree_node ();
12035 tree dyn_init = init ? NULL_TREE : tree_node ();
12036 unused -= vtable;
12037
12038 if (get_overrun ())
12039 return false;
12040
12041 bool initialized = (VAR_P (decl) ? bool (DECL_INITIALIZED_P (decl))
12042 : bool (DECL_INITIAL (decl)));
12043 tree maybe_dup = odr_duplicate (maybe_template, initialized);
12044 bool installing = maybe_dup && !initialized;
12045 if (installing)
12046 {
12047 if (DECL_EXTERNAL (decl))
12048 DECL_NOT_REALLY_EXTERN (decl) = true;
12049 if (VAR_P (decl))
12050 {
12051 DECL_INITIALIZED_P (decl) = true;
12052 if (maybe_dup && DECL_INITIALIZED_BY_CONSTANT_EXPRESSION_P (maybe_dup))
12053 DECL_INITIALIZED_BY_CONSTANT_EXPRESSION_P (decl) = true;
12054 if (DECL_IMPLICIT_INSTANTIATION (decl)
12055 || (DECL_CLASS_SCOPE_P (decl)
12056 && !DECL_VTABLE_OR_VTT_P (decl)
12057 && !DECL_TEMPLATE_INFO (decl)))
12058 note_vague_linkage_variable (decl);
12059 }
12060 DECL_INITIAL (decl) = init;
12061 if (!dyn_init)
12062 ;
12063 else if (CP_DECL_THREAD_LOCAL_P (decl))
12064 tls_aggregates = tree_cons (dyn_init, decl, tls_aggregates);
12065 else
12066 static_aggregates = tree_cons (dyn_init, decl, static_aggregates);
12067 }
12068 else if (maybe_dup)
12069 {
12070 // FIXME:QOI Check matching defn
12071 }
12072
12073 return true;
12074 }
12075
12076 /* If MEMBER doesn't have an independent life outside the class,
12077 return it (or its TEMPLATE_DECL). Otherwise NULL. */
12078
12079 static tree
12080 member_owned_by_class (tree member)
12081 {
12082 gcc_assert (DECL_P (member));
12083
12084 /* Clones are owned by their origin. */
12085 if (DECL_CLONED_FUNCTION_P (member))
12086 return NULL;
12087
12088 if (TREE_CODE (member) == FIELD_DECL)
12089 /* FIELD_DECLS can have template info in some cases. We always
12090 want the FIELD_DECL though, as there's never a TEMPLATE_DECL
12091 wrapping them. */
12092 return member;
12093
12094 int use_tpl = -1;
12095 if (tree ti = node_template_info (member, use_tpl))
12096 {
12097 // FIXME: Don't bail on things that CANNOT have their own
12098 // template header. No, make sure they're in the same cluster.
12099 if (use_tpl > 0)
12100 return NULL_TREE;
12101
12102 if (DECL_TEMPLATE_RESULT (TI_TEMPLATE (ti)) == member)
12103 member = TI_TEMPLATE (ti);
12104 }
12105 return member;
12106 }
12107
12108 void
12109 trees_out::write_class_def (tree defn)
12110 {
12111 gcc_assert (DECL_P (defn));
12112 if (streaming_p ())
12113 dump () && dump ("Writing class definition %N", defn);
12114
12115 tree type = TREE_TYPE (defn);
12116 tree_node (TYPE_SIZE (type));
12117 tree_node (TYPE_SIZE_UNIT (type));
12118 tree_node (TYPE_VFIELD (type));
12119 tree_node (TYPE_BINFO (type));
12120
12121 vec_chained_decls (TYPE_FIELDS (type));
12122
12123 /* Every class but __as_base has a type-specific. */
12124 gcc_checking_assert (!TYPE_LANG_SPECIFIC (type) == IS_FAKE_BASE_TYPE (type));
12125
12126 if (TYPE_LANG_SPECIFIC (type))
12127 {
12128 {
12129 vec<tree, va_gc> *v = CLASSTYPE_MEMBER_VEC (type);
12130 if (!v)
12131 {
12132 gcc_checking_assert (!streaming_p ());
12133 /* Force a class vector. */
12134 v = set_class_bindings (type, -1);
12135 gcc_checking_assert (v);
12136 }
12137
12138 unsigned len = v->length ();
12139 if (streaming_p ())
12140 u (len);
12141 for (unsigned ix = 0; ix != len; ix++)
12142 {
12143 tree m = (*v)[ix];
12144 if (TREE_CODE (m) == TYPE_DECL
12145 && DECL_ARTIFICIAL (m)
12146 && TYPE_STUB_DECL (TREE_TYPE (m)) == m)
12147 /* This is a using-decl for a type, or an anonymous
12148 struct (maybe with a typedef name). Write the type. */
12149 m = TREE_TYPE (m);
12150 tree_node (m);
12151 }
12152 }
12153 tree_node (CLASSTYPE_LAMBDA_EXPR (type));
12154
12155 /* TYPE_CONTAINS_VPTR_P looks at the vbase vector, which the
12156 reader won't know at this point. */
12157 int has_vptr = TYPE_CONTAINS_VPTR_P (type);
12158
12159 if (streaming_p ())
12160 {
12161 unsigned nvbases = vec_safe_length (CLASSTYPE_VBASECLASSES (type));
12162 u (nvbases);
12163 i (has_vptr);
12164 }
12165
12166 if (has_vptr)
12167 {
12168 tree_vec (CLASSTYPE_PURE_VIRTUALS (type));
12169 tree_pair_vec (CLASSTYPE_VCALL_INDICES (type));
12170 tree_node (CLASSTYPE_KEY_METHOD (type));
12171 }
12172 }
12173
12174 if (TYPE_LANG_SPECIFIC (type))
12175 {
12176 tree_node (CLASSTYPE_PRIMARY_BINFO (type));
12177
12178 tree as_base = CLASSTYPE_AS_BASE (type);
12179 if (as_base)
12180 as_base = TYPE_NAME (as_base);
12181 tree_node (as_base);
12182
12183 /* Write the vtables. */
12184 tree vtables = CLASSTYPE_VTABLES (type);
12185 vec_chained_decls (vtables);
12186 for (; vtables; vtables = TREE_CHAIN (vtables))
12187 write_definition (vtables);
12188
12189 /* Write the friend classes. */
12190 tree_list (CLASSTYPE_FRIEND_CLASSES (type), false);
12191
12192 /* Write the friend functions. */
12193 for (tree friends = DECL_FRIENDLIST (defn);
12194 friends; friends = TREE_CHAIN (friends))
12195 {
12196 /* Name of these friends. */
12197 tree_node (TREE_PURPOSE (friends));
12198 tree_list (TREE_VALUE (friends), false);
12199 }
12200 /* End of friend fns. */
12201 tree_node (NULL_TREE);
12202
12203 /* Write the decl list. */
12204 tree_list (CLASSTYPE_DECL_LIST (type), true);
12205
12206 if (TYPE_CONTAINS_VPTR_P (type))
12207 {
12208 /* Write the thunks. */
12209 for (tree decls = TYPE_FIELDS (type);
12210 decls; decls = DECL_CHAIN (decls))
12211 if (TREE_CODE (decls) == FUNCTION_DECL
12212 && DECL_VIRTUAL_P (decls)
12213 && DECL_THUNKS (decls))
12214 {
12215 tree_node (decls);
12216 /* Thunks are always unique, so chaining is ok. */
12217 chained_decls (DECL_THUNKS (decls));
12218 }
12219 tree_node (NULL_TREE);
12220 }
12221 }
12222 }
12223
12224 void
12225 trees_out::mark_class_member (tree member, bool do_defn)
12226 {
12227 gcc_assert (DECL_P (member));
12228
12229 member = member_owned_by_class (member);
12230 if (member)
12231 mark_declaration (member, do_defn && has_definition (member));
12232 }
12233
12234 void
12235 trees_out::mark_class_def (tree defn)
12236 {
12237 gcc_assert (DECL_P (defn));
12238 tree type = TREE_TYPE (defn);
12239 /* Mark the class members that are not type-decls and cannot have
12240 independent definitions. */
12241 for (tree member = TYPE_FIELDS (type); member; member = DECL_CHAIN (member))
12242 if (TREE_CODE (member) == FIELD_DECL
12243 || TREE_CODE (member) == USING_DECL
12244 /* A cloned enum-decl from 'using enum unrelated;' */
12245 || (TREE_CODE (member) == CONST_DECL
12246 && DECL_CONTEXT (member) == type))
12247 {
12248 mark_class_member (member);
12249 if (TREE_CODE (member) == FIELD_DECL)
12250 if (tree repr = DECL_BIT_FIELD_REPRESENTATIVE (member))
12251 /* If we're marking a class template definition, then
12252 this'll contain the width (as set by grokbitfield)
12253 instead of a decl. */
12254 if (DECL_P (repr))
12255 mark_declaration (repr, false);
12256 }
12257
12258 /* Mark the binfo hierarchy. */
12259 for (tree child = TYPE_BINFO (type); child; child = TREE_CHAIN (child))
12260 mark_by_value (child);
12261
12262 if (TYPE_LANG_SPECIFIC (type))
12263 {
12264 for (tree vtable = CLASSTYPE_VTABLES (type);
12265 vtable; vtable = TREE_CHAIN (vtable))
12266 mark_declaration (vtable, true);
12267
12268 if (TYPE_CONTAINS_VPTR_P (type))
12269 /* Mark the thunks, they belong to the class definition,
12270 /not/ the thunked-to function. */
12271 for (tree decls = TYPE_FIELDS (type);
12272 decls; decls = DECL_CHAIN (decls))
12273 if (TREE_CODE (decls) == FUNCTION_DECL)
12274 for (tree thunks = DECL_THUNKS (decls);
12275 thunks; thunks = DECL_CHAIN (thunks))
12276 mark_declaration (thunks, false);
12277 }
12278 }
12279
12280 /* Nop sorting, needed for resorting the member vec. */
12281
12282 static void
12283 nop (void *, void *, void *)
12284 {
12285 }
12286
12287 bool
12288 trees_in::read_class_def (tree defn, tree maybe_template)
12289 {
12290 gcc_assert (DECL_P (defn));
12291 dump () && dump ("Reading class definition %N", defn);
12292 tree type = TREE_TYPE (defn);
12293 tree size = tree_node ();
12294 tree size_unit = tree_node ();
12295 tree vfield = tree_node ();
12296 tree binfo = tree_node ();
12297 vec<tree, va_gc> *vbase_vec = NULL;
12298 vec<tree, va_gc> *member_vec = NULL;
12299 vec<tree, va_gc> *pure_virts = NULL;
12300 vec<tree_pair_s, va_gc> *vcall_indices = NULL;
12301 tree key_method = NULL_TREE;
12302 tree lambda = NULL_TREE;
12303
12304 /* Read the fields. */
12305 vec<tree, va_heap> *fields = vec_chained_decls ();
12306
12307 if (TYPE_LANG_SPECIFIC (type))
12308 {
12309 if (unsigned len = u ())
12310 {
12311 vec_alloc (member_vec, len);
12312 for (unsigned ix = 0; ix != len; ix++)
12313 {
12314 tree m = tree_node ();
12315 if (get_overrun ())
12316 break;
12317 if (TYPE_P (m))
12318 m = TYPE_STUB_DECL (m);
12319 member_vec->quick_push (m);
12320 }
12321 }
12322 lambda = tree_node ();
12323
12324 if (!get_overrun ())
12325 {
12326 unsigned nvbases = u ();
12327 if (nvbases)
12328 {
12329 vec_alloc (vbase_vec, nvbases);
12330 for (tree child = binfo; child; child = TREE_CHAIN (child))
12331 if (BINFO_VIRTUAL_P (child))
12332 vbase_vec->quick_push (child);
12333 }
12334 }
12335
12336 if (!get_overrun ())
12337 {
12338 int has_vptr = i ();
12339 if (has_vptr)
12340 {
12341 pure_virts = tree_vec ();
12342 vcall_indices = tree_pair_vec ();
12343 key_method = tree_node ();
12344 }
12345 }
12346 }
12347
12348 tree maybe_dup = odr_duplicate (maybe_template, TYPE_SIZE (type));
12349 bool installing = maybe_dup && !TYPE_SIZE (type);
12350 if (installing)
12351 {
12352 if (maybe_dup != defn)
12353 {
12354 // FIXME: This is needed on other defns too, almost
12355 // duplicate-decl like? See is_matching_decl too.
12356 /* Copy flags from the duplicate. */
12357 tree type_dup = TREE_TYPE (maybe_dup);
12358
12359 /* Core pieces. */
12360 TYPE_MODE_RAW (type) = TYPE_MODE_RAW (type_dup);
12361 SET_DECL_MODE (defn, DECL_MODE (maybe_dup));
12362 TREE_ADDRESSABLE (type) = TREE_ADDRESSABLE (type_dup);
12363 DECL_SIZE (defn) = DECL_SIZE (maybe_dup);
12364 DECL_SIZE_UNIT (defn) = DECL_SIZE_UNIT (maybe_dup);
12365 DECL_ALIGN_RAW (defn) = DECL_ALIGN_RAW (maybe_dup);
12366 DECL_WARN_IF_NOT_ALIGN_RAW (defn)
12367 = DECL_WARN_IF_NOT_ALIGN_RAW (maybe_dup);
12368 DECL_USER_ALIGN (defn) = DECL_USER_ALIGN (maybe_dup);
12369
12370 /* C++ pieces. */
12371 TYPE_POLYMORPHIC_P (type) = TYPE_POLYMORPHIC_P (type_dup);
12372 TYPE_HAS_USER_CONSTRUCTOR (type)
12373 = TYPE_HAS_USER_CONSTRUCTOR (type_dup);
12374 TYPE_HAS_NONTRIVIAL_DESTRUCTOR (type)
12375 = TYPE_HAS_NONTRIVIAL_DESTRUCTOR (type_dup);
12376
12377 if (auto ls = TYPE_LANG_SPECIFIC (type_dup))
12378 {
12379 if (TYPE_LANG_SPECIFIC (type))
12380 {
12381 CLASSTYPE_BEFRIENDING_CLASSES (type_dup)
12382 = CLASSTYPE_BEFRIENDING_CLASSES (type);
12383 if (!ANON_AGGR_TYPE_P (type))
12384 CLASSTYPE_TYPEINFO_VAR (type_dup)
12385 = CLASSTYPE_TYPEINFO_VAR (type);
12386 }
12387 for (tree v = type; v; v = TYPE_NEXT_VARIANT (v))
12388 TYPE_LANG_SPECIFIC (v) = ls;
12389 }
12390 }
12391
12392 TYPE_SIZE (type) = size;
12393 TYPE_SIZE_UNIT (type) = size_unit;
12394
12395 if (fields)
12396 {
12397 tree *chain = &TYPE_FIELDS (type);
12398 unsigned len = fields->length ();
12399 for (unsigned ix = 0; ix != len; ix++)
12400 {
12401 tree decl = (*fields)[ix];
12402
12403 if (!decl)
12404 {
12405 /* An anonymous struct with typedef name. */
12406 tree tdef = (*fields)[ix+1];
12407 decl = TYPE_STUB_DECL (TREE_TYPE (tdef));
12408 gcc_checking_assert (IDENTIFIER_ANON_P (DECL_NAME (decl))
12409 && decl != tdef);
12410 }
12411
12412 gcc_checking_assert (!*chain == !DECL_CLONED_FUNCTION_P (decl));
12413 *chain = decl;
12414 chain = &DECL_CHAIN (decl);
12415
12416 if (TREE_CODE (decl) == FIELD_DECL
12417 && ANON_AGGR_TYPE_P (TREE_TYPE (decl)))
12418 {
12419 tree anon_type = TYPE_MAIN_VARIANT (TREE_TYPE (decl));
12420 if (DECL_NAME (defn) == as_base_identifier)
12421 /* ANON_AGGR_TYPE_FIELD should already point to the
12422 original FIELD_DECL; don't overwrite it to point
12423 to the as-base FIELD_DECL copy. */
12424 gcc_checking_assert (ANON_AGGR_TYPE_FIELD (anon_type));
12425 else
12426 ANON_AGGR_TYPE_FIELD (anon_type) = decl;
12427 }
12428
12429 if (TREE_CODE (decl) == USING_DECL
12430 && TREE_CODE (USING_DECL_SCOPE (decl)) == RECORD_TYPE)
12431 {
12432 /* Reconstruct DECL_ACCESS. */
12433 tree decls = USING_DECL_DECLS (decl);
12434 tree access = declared_access (decl);
12435
12436 for (ovl_iterator iter (decls); iter; ++iter)
12437 {
12438 tree d = *iter;
12439
12440 retrofit_lang_decl (d);
12441 tree list = DECL_ACCESS (d);
12442
12443 if (!purpose_member (type, list))
12444 DECL_ACCESS (d) = tree_cons (type, access, list);
12445 }
12446 }
12447 }
12448 }
12449
12450 TYPE_VFIELD (type) = vfield;
12451 TYPE_BINFO (type) = binfo;
12452
12453 if (TYPE_LANG_SPECIFIC (type))
12454 {
12455 CLASSTYPE_LAMBDA_EXPR (type) = lambda;
12456
12457 CLASSTYPE_MEMBER_VEC (type) = member_vec;
12458 CLASSTYPE_PURE_VIRTUALS (type) = pure_virts;
12459 CLASSTYPE_VCALL_INDICES (type) = vcall_indices;
12460
12461 CLASSTYPE_KEY_METHOD (type) = key_method;
12462
12463 CLASSTYPE_VBASECLASSES (type) = vbase_vec;
12464
12465 /* Resort the member vector. */
12466 resort_type_member_vec (member_vec, NULL, nop, NULL);
12467 }
12468 }
12469 else if (maybe_dup)
12470 {
12471 // FIXME:QOI Check matching defn
12472 }
12473
12474 if (TYPE_LANG_SPECIFIC (type))
12475 {
12476 tree primary = tree_node ();
12477 tree as_base = tree_node ();
12478
12479 if (as_base)
12480 as_base = TREE_TYPE (as_base);
12481
12482 /* Read the vtables. */
12483 vec<tree, va_heap> *vtables = vec_chained_decls ();
12484 if (vtables)
12485 {
12486 unsigned len = vtables->length ();
12487 for (unsigned ix = 0; ix != len; ix++)
12488 {
12489 tree vtable = (*vtables)[ix];
12490 read_var_def (vtable, vtable);
12491 }
12492 }
12493
12494 tree friend_classes = tree_list (false);
12495 tree friend_functions = NULL_TREE;
12496 for (tree *chain = &friend_functions;
12497 tree name = tree_node (); chain = &TREE_CHAIN (*chain))
12498 {
12499 tree val = tree_list (false);
12500 *chain = build_tree_list (name, val);
12501 }
12502 tree decl_list = tree_list (true);
12503
12504 if (installing)
12505 {
12506 CLASSTYPE_PRIMARY_BINFO (type) = primary;
12507 CLASSTYPE_AS_BASE (type) = as_base;
12508
12509 if (vtables)
12510 {
12511 if ((!CLASSTYPE_KEY_METHOD (type)
12512 /* Sneaky user may have defined it inline
12513 out-of-class. */
12514 || DECL_DECLARED_INLINE_P (CLASSTYPE_KEY_METHOD (type)))
12515 /* An imported non-template class attached to a module
12516 doesn't need to have its vtables emitted here. */
12517 && (CLASSTYPE_USE_TEMPLATE (type)
12518 || !DECL_MODULE_ATTACH_P (defn)))
12519 vec_safe_push (keyed_classes, type);
12520 unsigned len = vtables->length ();
12521 tree *chain = &CLASSTYPE_VTABLES (type);
12522 for (unsigned ix = 0; ix != len; ix++)
12523 {
12524 tree vtable = (*vtables)[ix];
12525 gcc_checking_assert (!*chain);
12526 *chain = vtable;
12527 chain = &DECL_CHAIN (vtable);
12528 }
12529 }
12530 CLASSTYPE_FRIEND_CLASSES (type) = friend_classes;
12531 DECL_FRIENDLIST (defn) = friend_functions;
12532 CLASSTYPE_DECL_LIST (type) = decl_list;
12533
12534 for (; friend_classes; friend_classes = TREE_CHAIN (friend_classes))
12535 {
12536 tree f = TREE_VALUE (friend_classes);
12537 if (TREE_CODE (f) == TEMPLATE_DECL)
12538 f = TREE_TYPE (f);
12539
12540 if (CLASS_TYPE_P (f))
12541 {
12542 CLASSTYPE_BEFRIENDING_CLASSES (f)
12543 = tree_cons (NULL_TREE, type,
12544 CLASSTYPE_BEFRIENDING_CLASSES (f));
12545 dump () && dump ("Class %N befriending %C:%N",
12546 type, TREE_CODE (f), f);
12547 }
12548 }
12549
12550 for (; friend_functions;
12551 friend_functions = TREE_CHAIN (friend_functions))
12552 for (tree friend_decls = TREE_VALUE (friend_functions);
12553 friend_decls; friend_decls = TREE_CHAIN (friend_decls))
12554 {
12555 tree f = TREE_VALUE (friend_decls);
12556
12557 DECL_BEFRIENDING_CLASSES (f)
12558 = tree_cons (NULL_TREE, type, DECL_BEFRIENDING_CLASSES (f));
12559 dump () && dump ("Class %N befriending %C:%N",
12560 type, TREE_CODE (f), f);
12561 }
12562 }
12563
12564 if (TYPE_CONTAINS_VPTR_P (type))
12565 /* Read and install the thunks. */
12566 while (tree vfunc = tree_node ())
12567 {
12568 tree thunks = chained_decls ();
12569 if (installing)
12570 SET_DECL_THUNKS (vfunc, thunks);
12571 }
12572
12573 vec_free (vtables);
12574 }
12575
12576 /* Propagate to all variants. */
12577 if (installing)
12578 fixup_type_variants (type);
12579
12580 /* IS_FAKE_BASE_TYPE is inaccurate at this point, because if this is
12581 the fake base, we've not hooked it into the containing class's
12582 data structure yet. Fortunately it has a unique name. */
12583 if (installing
12584 && DECL_NAME (defn) != as_base_identifier
12585 && (!CLASSTYPE_TEMPLATE_INFO (type)
12586 || !uses_template_parms (TI_ARGS (CLASSTYPE_TEMPLATE_INFO (type)))))
12587 /* Emit debug info. It'd be nice to know if the interface TU
12588 already emitted this. */
12589 rest_of_type_compilation (type, !LOCAL_CLASS_P (type));
12590
12591 vec_free (fields);
12592
12593 return !get_overrun ();
12594 }
12595
12596 void
12597 trees_out::write_enum_def (tree decl)
12598 {
12599 tree type = TREE_TYPE (decl);
12600
12601 tree_node (TYPE_VALUES (type));
12602 /* Note that we stream TYPE_MIN/MAX_VALUE directly as part of the
12603 ENUMERAL_TYPE. */
12604 }
12605
12606 void
12607 trees_out::mark_enum_def (tree decl)
12608 {
12609 tree type = TREE_TYPE (decl);
12610
12611 for (tree values = TYPE_VALUES (type); values; values = TREE_CHAIN (values))
12612 {
12613 tree cst = TREE_VALUE (values);
12614 mark_by_value (cst);
12615 /* We must mark the init to avoid circularity in tt_enum_int. */
12616 if (tree init = DECL_INITIAL (cst))
12617 if (TREE_CODE (init) == INTEGER_CST)
12618 mark_by_value (init);
12619 }
12620 }
12621
12622 bool
12623 trees_in::read_enum_def (tree defn, tree maybe_template)
12624 {
12625 tree type = TREE_TYPE (defn);
12626 tree values = tree_node ();
12627
12628 if (get_overrun ())
12629 return false;
12630
12631 tree maybe_dup = odr_duplicate (maybe_template, TYPE_VALUES (type));
12632 bool installing = maybe_dup && !TYPE_VALUES (type);
12633
12634 if (installing)
12635 {
12636 TYPE_VALUES (type) = values;
12637 /* Note that we stream TYPE_MIN/MAX_VALUE directly as part of the
12638 ENUMERAL_TYPE. */
12639
12640 rest_of_type_compilation (type, DECL_NAMESPACE_SCOPE_P (defn));
12641 }
12642 else if (maybe_dup)
12643 {
12644 tree known = TYPE_VALUES (type);
12645 for (; known && values;
12646 known = TREE_CHAIN (known), values = TREE_CHAIN (values))
12647 {
12648 tree known_decl = TREE_VALUE (known);
12649 tree new_decl = TREE_VALUE (values);
12650
12651 if (DECL_NAME (known_decl) != DECL_NAME (new_decl))
12652 break;
12653
12654 new_decl = maybe_duplicate (new_decl);
12655
12656 if (!cp_tree_equal (DECL_INITIAL (known_decl),
12657 DECL_INITIAL (new_decl)))
12658 break;
12659 }
12660
12661 if (known || values)
12662 {
12663 error_at (DECL_SOURCE_LOCATION (maybe_dup),
12664 "definition of %qD does not match", maybe_dup);
12665 inform (DECL_SOURCE_LOCATION (defn),
12666 "existing definition %qD", defn);
12667
12668 tree known_decl = NULL_TREE, new_decl = NULL_TREE;
12669
12670 if (known)
12671 known_decl = TREE_VALUE (known);
12672 if (values)
12673 new_decl = maybe_duplicate (TREE_VALUE (values));
12674
12675 if (known_decl && new_decl)
12676 {
12677 inform (DECL_SOURCE_LOCATION (new_decl),
12678 "... this enumerator %qD", new_decl);
12679 inform (DECL_SOURCE_LOCATION (known_decl),
12680 "enumerator %qD does not match ...", known_decl);
12681 }
12682 else if (known_decl || new_decl)
12683 {
12684 tree extra = known_decl ? known_decl : new_decl;
12685 inform (DECL_SOURCE_LOCATION (extra),
12686 "additional enumerators beginning with %qD", extra);
12687 }
12688 else
12689 inform (DECL_SOURCE_LOCATION (maybe_dup),
12690 "enumeration range differs");
12691
12692 /* Mark it bad. */
12693 unmatched_duplicate (maybe_template);
12694 }
12695 }
12696
12697 return true;
12698 }
12699
12700 /* Write out the body of DECL. See above circularity note. */
12701
12702 void
12703 trees_out::write_definition (tree decl)
12704 {
12705 if (streaming_p ())
12706 {
12707 assert_definition (decl);
12708 dump ()
12709 && dump ("Writing definition %C:%N", TREE_CODE (decl), decl);
12710 }
12711 else
12712 dump (dumper::DEPEND)
12713 && dump ("Depending definition %C:%N", TREE_CODE (decl), decl);
12714
12715 again:
12716 switch (TREE_CODE (decl))
12717 {
12718 default:
12719 gcc_unreachable ();
12720
12721 case TEMPLATE_DECL:
12722 decl = DECL_TEMPLATE_RESULT (decl);
12723 goto again;
12724
12725 case FUNCTION_DECL:
12726 write_function_def (decl);
12727 break;
12728
12729 case TYPE_DECL:
12730 {
12731 tree type = TREE_TYPE (decl);
12732 gcc_assert (TYPE_MAIN_VARIANT (type) == type
12733 && TYPE_NAME (type) == decl);
12734 if (TREE_CODE (type) == ENUMERAL_TYPE)
12735 write_enum_def (decl);
12736 else
12737 write_class_def (decl);
12738 }
12739 break;
12740
12741 case VAR_DECL:
12742 case CONCEPT_DECL:
12743 write_var_def (decl);
12744 break;
12745 }
12746 }
12747
12748 /* Mark a declaration for by-value walking. If DO_DEFN is true, mark
12749 its body too. */
12750
12751 void
12752 trees_out::mark_declaration (tree decl, bool do_defn)
12753 {
12754 mark_by_value (decl);
12755
12756 if (TREE_CODE (decl) == TEMPLATE_DECL)
12757 decl = DECL_TEMPLATE_RESULT (decl);
12758
12759 if (!do_defn)
12760 return;
12761
12762 switch (TREE_CODE (decl))
12763 {
12764 default:
12765 gcc_unreachable ();
12766
12767 case FUNCTION_DECL:
12768 mark_function_def (decl);
12769 break;
12770
12771 case TYPE_DECL:
12772 {
12773 tree type = TREE_TYPE (decl);
12774 gcc_assert (TYPE_MAIN_VARIANT (type) == type
12775 && TYPE_NAME (type) == decl);
12776 if (TREE_CODE (type) == ENUMERAL_TYPE)
12777 mark_enum_def (decl);
12778 else
12779 mark_class_def (decl);
12780 }
12781 break;
12782
12783 case VAR_DECL:
12784 case CONCEPT_DECL:
12785 mark_var_def (decl);
12786 break;
12787 }
12788 }
12789
12790 /* Read in the body of DECL. See above circularity note. */
12791
12792 bool
12793 trees_in::read_definition (tree decl)
12794 {
12795 dump () && dump ("Reading definition %C %N", TREE_CODE (decl), decl);
12796
12797 tree maybe_template = decl;
12798
12799 again:
12800 switch (TREE_CODE (decl))
12801 {
12802 default:
12803 break;
12804
12805 case TEMPLATE_DECL:
12806 decl = DECL_TEMPLATE_RESULT (decl);
12807 goto again;
12808
12809 case FUNCTION_DECL:
12810 return read_function_def (decl, maybe_template);
12811
12812 case TYPE_DECL:
12813 {
12814 tree type = TREE_TYPE (decl);
12815 gcc_assert (TYPE_MAIN_VARIANT (type) == type
12816 && TYPE_NAME (type) == decl);
12817 if (TREE_CODE (type) == ENUMERAL_TYPE)
12818 return read_enum_def (decl, maybe_template);
12819 else
12820 return read_class_def (decl, maybe_template);
12821 }
12822 break;
12823
12824 case VAR_DECL:
12825 case CONCEPT_DECL:
12826 return read_var_def (decl, maybe_template);
12827 }
12828
12829 return false;
12830 }
12831
12832 /* Lookup an maybe insert a slot for depset for KEY. */
12833
12834 depset **
12835 depset::hash::entity_slot (tree entity, bool insert)
12836 {
12837 traits::compare_type key (entity, NULL);
12838 depset **slot = find_slot_with_hash (key, traits::hash (key),
12839 insert ? INSERT : NO_INSERT);
12840
12841 return slot;
12842 }
12843
12844 depset **
12845 depset::hash::binding_slot (tree ctx, tree name, bool insert)
12846 {
12847 traits::compare_type key (ctx, name);
12848 depset **slot = find_slot_with_hash (key, traits::hash (key),
12849 insert ? INSERT : NO_INSERT);
12850
12851 return slot;
12852 }
12853
12854 depset *
12855 depset::hash::find_dependency (tree decl)
12856 {
12857 depset **slot = entity_slot (decl, false);
12858
12859 return slot ? *slot : NULL;
12860 }
12861
12862 depset *
12863 depset::hash::find_binding (tree ctx, tree name)
12864 {
12865 depset **slot = binding_slot (ctx, name, false);
12866
12867 return slot ? *slot : NULL;
12868 }
12869
12870 /* DECL is a newly discovered dependency. Create the depset, if it
12871 doesn't already exist. Add it to the worklist if so.
12872
12873 DECL will be an OVL_USING_P OVERLOAD, if it's from a binding that's
12874 a using decl.
12875
12876 We do not have to worry about adding the same dependency more than
12877 once. First it's harmless, but secondly the TREE_VISITED marking
12878 prevents us wanting to do it anyway. */
12879
12880 depset *
12881 depset::hash::make_dependency (tree decl, entity_kind ek)
12882 {
12883 /* Make sure we're being told consistent information. */
12884 gcc_checking_assert ((ek == EK_NAMESPACE)
12885 == (TREE_CODE (decl) == NAMESPACE_DECL
12886 && !DECL_NAMESPACE_ALIAS (decl)));
12887 gcc_checking_assert (ek != EK_BINDING && ek != EK_REDIRECT);
12888 gcc_checking_assert (TREE_CODE (decl) != FIELD_DECL
12889 && (TREE_CODE (decl) != USING_DECL
12890 || TREE_CODE (DECL_CONTEXT (decl)) == FUNCTION_DECL));
12891 gcc_checking_assert (!is_key_order ());
12892 if (ek == EK_USING)
12893 gcc_checking_assert (TREE_CODE (decl) == OVERLOAD);
12894
12895 if (TREE_CODE (decl) == TEMPLATE_DECL)
12896 /* The template should have copied these from its result decl. */
12897 gcc_checking_assert (DECL_MODULE_EXPORT_P (decl)
12898 == DECL_MODULE_EXPORT_P (DECL_TEMPLATE_RESULT (decl)));
12899
12900 depset **slot = entity_slot (decl, true);
12901 depset *dep = *slot;
12902 bool for_binding = ek == EK_FOR_BINDING;
12903
12904 if (!dep)
12905 {
12906 if ((DECL_IMPLICIT_TYPEDEF_P (decl)
12907 /* ... not an enum, for instance. */
12908 && RECORD_OR_UNION_TYPE_P (TREE_TYPE (decl))
12909 && TYPE_LANG_SPECIFIC (TREE_TYPE (decl))
12910 && CLASSTYPE_USE_TEMPLATE (TREE_TYPE (decl)) == 2)
12911 || (VAR_P (decl)
12912 && DECL_LANG_SPECIFIC (decl)
12913 && DECL_USE_TEMPLATE (decl) == 2))
12914 {
12915 /* A partial or explicit specialization. Partial
12916 specializations might not be in the hash table, because
12917 there can be multiple differently-constrained variants.
12918
12919 template<typename T> class silly;
12920 template<typename T> requires true class silly {};
12921
12922 We need to find them, insert their TEMPLATE_DECL in the
12923 dep_hash, and then convert the dep we just found into a
12924 redirect. */
12925
12926 tree ti = get_template_info (decl);
12927 tree tmpl = TI_TEMPLATE (ti);
12928 tree partial = NULL_TREE;
12929 for (tree spec = DECL_TEMPLATE_SPECIALIZATIONS (tmpl);
12930 spec; spec = TREE_CHAIN (spec))
12931 if (DECL_TEMPLATE_RESULT (TREE_VALUE (spec)) == decl)
12932 {
12933 partial = TREE_VALUE (spec);
12934 break;
12935 }
12936
12937 if (partial)
12938 {
12939 /* Eagerly create an empty redirect. The following
12940 make_dependency call could cause hash reallocation,
12941 and invalidate slot's value. */
12942 depset *redirect = make_entity (decl, EK_REDIRECT);
12943
12944 /* Redirects are never reached -- always snap to their target. */
12945 redirect->set_flag_bit<DB_UNREACHED_BIT> ();
12946
12947 *slot = redirect;
12948
12949 depset *tmpl_dep = make_dependency (partial, EK_PARTIAL);
12950 gcc_checking_assert (tmpl_dep->get_entity_kind () == EK_PARTIAL);
12951
12952 redirect->deps.safe_push (tmpl_dep);
12953
12954 return redirect;
12955 }
12956 }
12957
12958 bool has_def = ek != EK_USING && has_definition (decl);
12959 if (ek > EK_BINDING)
12960 ek = EK_DECL;
12961
12962 /* The only OVERLOADS we should see are USING decls from
12963 bindings. */
12964 *slot = dep = make_entity (decl, ek, has_def);
12965
12966 if (CHECKING_P && TREE_CODE (decl) == TEMPLATE_DECL)
12967 /* The template_result should otherwise not be in the
12968 table, or be an empty redirect (created above). */
12969 if (auto *eslot = entity_slot (DECL_TEMPLATE_RESULT (decl), false))
12970 gcc_checking_assert ((*eslot)->get_entity_kind () == EK_REDIRECT
12971 && !(*eslot)->deps.length ());
12972
12973 if (ek != EK_USING)
12974 {
12975 tree not_tmpl = STRIP_TEMPLATE (decl);
12976
12977 if (DECL_LANG_SPECIFIC (not_tmpl)
12978 && DECL_MODULE_IMPORT_P (not_tmpl))
12979 {
12980 /* Store the module number and index in cluster/section,
12981 so we don't have to look them up again. */
12982 unsigned index = import_entity_index (decl);
12983 module_state *from = import_entity_module (index);
12984 /* Remap will be zero for imports from partitions, which
12985 we want to treat as-if declared in this TU. */
12986 if (from->remap)
12987 {
12988 dep->cluster = index - from->entity_lwm;
12989 dep->section = from->remap;
12990 dep->set_flag_bit<DB_IMPORTED_BIT> ();
12991 }
12992 }
12993
12994 if (ek == EK_DECL
12995 && !dep->is_import ()
12996 && TREE_CODE (CP_DECL_CONTEXT (decl)) == NAMESPACE_DECL
12997 && !(TREE_CODE (decl) == TEMPLATE_DECL
12998 && DECL_UNINSTANTIATED_TEMPLATE_FRIEND_P (decl)))
12999 {
13000 tree ctx = CP_DECL_CONTEXT (decl);
13001
13002 if (!TREE_PUBLIC (ctx))
13003 /* Member of internal namespace. */
13004 dep->set_flag_bit<DB_IS_INTERNAL_BIT> ();
13005 else if (VAR_OR_FUNCTION_DECL_P (not_tmpl)
13006 && DECL_THIS_STATIC (not_tmpl))
13007 {
13008 /* An internal decl. This is ok in a GM entity. */
13009 if (!(header_module_p ()
13010 || !DECL_LANG_SPECIFIC (not_tmpl)
13011 || !DECL_MODULE_PURVIEW_P (not_tmpl)))
13012 dep->set_flag_bit<DB_IS_INTERNAL_BIT> ();
13013 }
13014 }
13015 }
13016
13017 if (!dep->is_import ())
13018 worklist.safe_push (dep);
13019 }
13020
13021 dump (dumper::DEPEND)
13022 && dump ("%s on %s %C:%N found",
13023 ek == EK_REDIRECT ? "Redirect"
13024 : for_binding ? "Binding" : "Dependency",
13025 dep->entity_kind_name (), TREE_CODE (decl), decl);
13026
13027 return dep;
13028 }
13029
13030 /* DEP is a newly discovered dependency. Append it to current's
13031 depset. */
13032
13033 void
13034 depset::hash::add_dependency (depset *dep)
13035 {
13036 gcc_checking_assert (current && !is_key_order ());
13037 current->deps.safe_push (dep);
13038
13039 if (dep->is_internal () && !current->is_internal ())
13040 current->set_flag_bit<DB_REFS_INTERNAL_BIT> ();
13041
13042 if (current->get_entity_kind () == EK_USING
13043 && DECL_IMPLICIT_TYPEDEF_P (dep->get_entity ())
13044 && TREE_CODE (TREE_TYPE (dep->get_entity ())) == ENUMERAL_TYPE)
13045 {
13046 /* CURRENT is an unwrapped using-decl and DECL is an enum's
13047 implicit typedef. Is CURRENT a member of the enum? */
13048 tree c_decl = OVL_FUNCTION (current->get_entity ());
13049
13050 if (TREE_CODE (c_decl) == CONST_DECL
13051 && (current->deps[0]->get_entity ()
13052 == CP_DECL_CONTEXT (dep->get_entity ())))
13053 /* Make DECL depend on CURRENT. */
13054 dep->deps.safe_push (current);
13055 }
13056
13057 if (dep->is_unreached ())
13058 {
13059 /* The dependency is reachable now. */
13060 reached_unreached = true;
13061 dep->clear_flag_bit<DB_UNREACHED_BIT> ();
13062 dump (dumper::DEPEND)
13063 && dump ("Reaching unreached %s %C:%N", dep->entity_kind_name (),
13064 TREE_CODE (dep->get_entity ()), dep->get_entity ());
13065 }
13066 }
13067
13068 depset *
13069 depset::hash::add_dependency (tree decl, entity_kind ek)
13070 {
13071 depset *dep;
13072
13073 if (is_key_order ())
13074 {
13075 dep = find_dependency (decl);
13076 if (dep)
13077 {
13078 current->deps.safe_push (dep);
13079 dump (dumper::MERGE)
13080 && dump ("Key dependency on %s %C:%N found",
13081 dep->entity_kind_name (), TREE_CODE (decl), decl);
13082 }
13083 else
13084 {
13085 /* It's not a mergeable decl, look for it in the original
13086 table. */
13087 dep = chain->find_dependency (decl);
13088 gcc_checking_assert (dep);
13089 }
13090 }
13091 else
13092 {
13093 dep = make_dependency (decl, ek);
13094 if (dep->get_entity_kind () != EK_REDIRECT)
13095 add_dependency (dep);
13096 }
13097
13098 return dep;
13099 }
13100
13101 void
13102 depset::hash::add_namespace_context (depset *dep, tree ns)
13103 {
13104 depset *ns_dep = make_dependency (ns, depset::EK_NAMESPACE);
13105 dep->deps.safe_push (ns_dep);
13106
13107 /* Mark it as special if imported so we don't walk connect when
13108 SCCing. */
13109 if (!dep->is_binding () && ns_dep->is_import ())
13110 dep->set_special ();
13111 }
13112
13113 struct add_binding_data
13114 {
13115 tree ns;
13116 bitmap partitions;
13117 depset *binding;
13118 depset::hash *hash;
13119 bool met_namespace;
13120 };
13121
13122 /* Return true if we are, or contain something that is exported. */
13123
13124 bool
13125 depset::hash::add_binding_entity (tree decl, WMB_Flags flags, void *data_)
13126 {
13127 auto data = static_cast <add_binding_data *> (data_);
13128
13129 if (!(TREE_CODE (decl) == NAMESPACE_DECL && !DECL_NAMESPACE_ALIAS (decl)))
13130 {
13131 tree inner = decl;
13132
13133 if (TREE_CODE (inner) == CONST_DECL
13134 && TREE_CODE (DECL_CONTEXT (inner)) == ENUMERAL_TYPE)
13135 inner = TYPE_NAME (DECL_CONTEXT (inner));
13136 else if (TREE_CODE (inner) == TEMPLATE_DECL)
13137 inner = DECL_TEMPLATE_RESULT (inner);
13138
13139 if ((!DECL_LANG_SPECIFIC (inner) || !DECL_MODULE_PURVIEW_P (inner))
13140 && !((flags & WMB_Using) && (flags & WMB_Export)))
13141 /* Ignore global module fragment entities unless explicitly
13142 exported with a using declaration. */
13143 return false;
13144
13145 if (VAR_OR_FUNCTION_DECL_P (inner)
13146 && DECL_THIS_STATIC (inner))
13147 {
13148 if (!header_module_p ())
13149 /* Ignore internal-linkage entitites. */
13150 return false;
13151 }
13152
13153 if ((TREE_CODE (decl) == VAR_DECL
13154 || TREE_CODE (decl) == TYPE_DECL)
13155 && DECL_TINFO_P (decl))
13156 /* Ignore TINFO things. */
13157 return false;
13158
13159 if (TREE_CODE (decl) == VAR_DECL && DECL_NTTP_OBJECT_P (decl))
13160 /* Ignore NTTP objects. */
13161 return false;
13162
13163 bool unscoped_enum_const_p = false;
13164 if (!(flags & WMB_Using) && CP_DECL_CONTEXT (decl) != data->ns)
13165 {
13166 /* A using that lost its wrapper or an unscoped enum
13167 constant. */
13168 /* FIXME: Ensure that unscoped enums are differentiated from
13169 'using enum' declarations when PR c++/114683 is fixed. */
13170 unscoped_enum_const_p = (TREE_CODE (decl) == CONST_DECL);
13171 flags = WMB_Flags (flags | WMB_Using);
13172 if (DECL_MODULE_EXPORT_P (TREE_CODE (decl) == CONST_DECL
13173 ? TYPE_NAME (TREE_TYPE (decl))
13174 : STRIP_TEMPLATE (decl)))
13175 flags = WMB_Flags (flags | WMB_Export);
13176 }
13177
13178 if (!data->binding)
13179 /* No binding to check. */;
13180 else if (flags & WMB_Using)
13181 {
13182 /* Look in the binding to see if we already have this
13183 using. */
13184 for (unsigned ix = data->binding->deps.length (); --ix;)
13185 {
13186 depset *d = data->binding->deps[ix];
13187 if (d->get_entity_kind () == EK_USING
13188 && OVL_FUNCTION (d->get_entity ()) == decl)
13189 {
13190 if (!(flags & WMB_Hidden))
13191 d->clear_hidden_binding ();
13192 if (flags & WMB_Export)
13193 OVL_EXPORT_P (d->get_entity ()) = true;
13194 return bool (flags & WMB_Export);
13195 }
13196 }
13197 }
13198 else if (flags & WMB_Dups)
13199 {
13200 /* Look in the binding to see if we already have this decl. */
13201 for (unsigned ix = data->binding->deps.length (); --ix;)
13202 {
13203 depset *d = data->binding->deps[ix];
13204 if (d->get_entity () == decl)
13205 {
13206 if (!(flags & WMB_Hidden))
13207 d->clear_hidden_binding ();
13208 return false;
13209 }
13210 }
13211 }
13212
13213 /* We're adding something. */
13214 if (!data->binding)
13215 {
13216 data->binding = make_binding (data->ns, DECL_NAME (decl));
13217 data->hash->add_namespace_context (data->binding, data->ns);
13218
13219 depset **slot = data->hash->binding_slot (data->ns,
13220 DECL_NAME (decl), true);
13221 gcc_checking_assert (!*slot);
13222 *slot = data->binding;
13223 }
13224
13225 /* Make sure nobody left a tree visited lying about. */
13226 gcc_checking_assert (!TREE_VISITED (decl));
13227
13228 if (flags & WMB_Using)
13229 {
13230 decl = ovl_make (decl, NULL_TREE);
13231 if (!unscoped_enum_const_p)
13232 OVL_USING_P (decl) = true;
13233 if (flags & WMB_Export)
13234 OVL_EXPORT_P (decl) = true;
13235 }
13236
13237 depset *dep = data->hash->make_dependency
13238 (decl, flags & WMB_Using ? EK_USING : EK_FOR_BINDING);
13239 if (flags & WMB_Hidden)
13240 dep->set_hidden_binding ();
13241 data->binding->deps.safe_push (dep);
13242 /* Binding and contents are mutually dependent. */
13243 dep->deps.safe_push (data->binding);
13244
13245 return (flags & WMB_Using
13246 ? flags & WMB_Export : DECL_MODULE_EXPORT_P (decl));
13247 }
13248 else if (DECL_NAME (decl) && !data->met_namespace)
13249 {
13250 /* Namespace, walk exactly once. */
13251 gcc_checking_assert (TREE_PUBLIC (decl));
13252 data->met_namespace = true;
13253 if (data->hash->add_namespace_entities (decl, data->partitions))
13254 {
13255 /* It contains an exported thing, so it is exported. */
13256 gcc_checking_assert (DECL_MODULE_PURVIEW_P (decl));
13257 DECL_MODULE_EXPORT_P (decl) = true;
13258 }
13259
13260 if (DECL_MODULE_PURVIEW_P (decl))
13261 {
13262 data->hash->make_dependency (decl, depset::EK_NAMESPACE);
13263
13264 return DECL_MODULE_EXPORT_P (decl);
13265 }
13266 }
13267
13268 return false;
13269 }
13270
13271 /* Recursively find all the namespace bindings of NS. Add a depset
13272 for every binding that contains an export or module-linkage entity.
13273 Add a defining depset for every such decl that we need to write a
13274 definition. Such defining depsets depend on the binding depset.
13275 Returns true if we contain something exported. */
13276
13277 bool
13278 depset::hash::add_namespace_entities (tree ns, bitmap partitions)
13279 {
13280 dump () && dump ("Looking for writables in %N", ns);
13281 dump.indent ();
13282
13283 unsigned count = 0;
13284 add_binding_data data;
13285 data.ns = ns;
13286 data.partitions = partitions;
13287 data.hash = this;
13288
13289 hash_table<named_decl_hash>::iterator end
13290 (DECL_NAMESPACE_BINDINGS (ns)->end ());
13291 for (hash_table<named_decl_hash>::iterator iter
13292 (DECL_NAMESPACE_BINDINGS (ns)->begin ()); iter != end; ++iter)
13293 {
13294 data.binding = nullptr;
13295 data.met_namespace = false;
13296 if (walk_module_binding (*iter, partitions, add_binding_entity, &data))
13297 count++;
13298 }
13299
13300 if (count)
13301 dump () && dump ("Found %u entries", count);
13302 dump.outdent ();
13303
13304 return count != 0;
13305 }
13306
13307 void
13308 depset::hash::add_partial_entities (vec<tree, va_gc> *partial_classes)
13309 {
13310 for (unsigned ix = 0; ix != partial_classes->length (); ix++)
13311 {
13312 tree inner = (*partial_classes)[ix];
13313
13314 depset *dep = make_dependency (inner, depset::EK_DECL);
13315
13316 if (dep->get_entity_kind () == depset::EK_REDIRECT)
13317 {
13318 dep = dep->deps[0];
13319 /* We should have recorded the template as a partial
13320 specialization. */
13321 gcc_checking_assert (dep->get_entity_kind ()
13322 == depset::EK_PARTIAL);
13323 }
13324 else
13325 /* It was an explicit specialization, not a partial one. */
13326 gcc_checking_assert (dep->get_entity_kind ()
13327 == depset::EK_SPECIALIZATION);
13328
13329 /* Only emit GM entities if reached. */
13330 if (!DECL_LANG_SPECIFIC (inner)
13331 || !DECL_MODULE_PURVIEW_P (inner))
13332 dep->set_flag_bit<DB_UNREACHED_BIT> ();
13333 }
13334 }
13335
13336 /* Add the members of imported classes that we defined in this TU.
13337 This will also include lazily created implicit member function
13338 declarations. (All others will be definitions.) */
13339
13340 void
13341 depset::hash::add_class_entities (vec<tree, va_gc> *class_members)
13342 {
13343 for (unsigned ix = 0; ix != class_members->length (); ix++)
13344 {
13345 tree defn = (*class_members)[ix];
13346 depset *dep = make_dependency (defn, EK_INNER_DECL);
13347
13348 if (dep->get_entity_kind () == EK_REDIRECT)
13349 dep = dep->deps[0];
13350
13351 /* Only non-instantiations need marking as members. */
13352 if (dep->get_entity_kind () == EK_DECL)
13353 dep->set_flag_bit <DB_IS_MEMBER_BIT> ();
13354 }
13355 }
13356
13357 /* We add the partial & explicit specializations, and the explicit
13358 instantiations. */
13359
13360 static void
13361 specialization_add (bool decl_p, spec_entry *entry, void *data_)
13362 {
13363 vec<spec_entry *> *data = reinterpret_cast <vec<spec_entry *> *> (data_);
13364
13365 if (!decl_p)
13366 {
13367 /* We exclusively use decls to locate things. Make sure there's
13368 no mismatch between the two specialization tables we keep.
13369 pt.cc optimizes instantiation lookup using a complicated
13370 heuristic. We don't attempt to replicate that algorithm, but
13371 observe its behaviour and reproduce it upon read back. */
13372
13373 gcc_checking_assert (TREE_CODE (entry->spec) == ENUMERAL_TYPE
13374 || DECL_CLASS_TEMPLATE_P (entry->tmpl));
13375
13376 gcc_checking_assert (!match_mergeable_specialization (true, entry));
13377 }
13378 else if (VAR_OR_FUNCTION_DECL_P (entry->spec))
13379 gcc_checking_assert (!DECL_LOCAL_DECL_P (entry->spec));
13380
13381 data->safe_push (entry);
13382 }
13383
13384 /* Arbitrary stable comparison. */
13385
13386 static int
13387 specialization_cmp (const void *a_, const void *b_)
13388 {
13389 const spec_entry *ea = *reinterpret_cast<const spec_entry *const *> (a_);
13390 const spec_entry *eb = *reinterpret_cast<const spec_entry *const *> (b_);
13391
13392 if (ea == eb)
13393 return 0;
13394
13395 tree a = ea->spec;
13396 tree b = eb->spec;
13397 if (TYPE_P (a))
13398 {
13399 a = TYPE_NAME (a);
13400 b = TYPE_NAME (b);
13401 }
13402
13403 if (a == b)
13404 /* This can happen with friend specializations. Just order by
13405 entry address. See note in depset_cmp. */
13406 return ea < eb ? -1 : +1;
13407
13408 return DECL_UID (a) < DECL_UID (b) ? -1 : +1;
13409 }
13410
13411 /* We add all kinds of specialializations. Implicit specializations
13412 should only streamed and walked if they are reachable from
13413 elsewhere. Hence the UNREACHED flag. This is making the
13414 assumption that it is cheaper to reinstantiate them on demand
13415 elsewhere, rather than stream them in when we instantiate their
13416 general template. Also, if we do stream them, we can only do that
13417 if they are not internal (which they can become if they themselves
13418 touch an internal entity?). */
13419
13420 void
13421 depset::hash::add_specializations (bool decl_p)
13422 {
13423 vec<spec_entry *> data;
13424 data.create (100);
13425 walk_specializations (decl_p, specialization_add, &data);
13426 data.qsort (specialization_cmp);
13427 while (data.length ())
13428 {
13429 spec_entry *entry = data.pop ();
13430 tree spec = entry->spec;
13431 int use_tpl = 0;
13432 bool is_friend = false;
13433
13434 if (decl_p && DECL_UNINSTANTIATED_TEMPLATE_FRIEND_P (entry->tmpl))
13435 /* A friend of a template. This is keyed to the
13436 instantiation. */
13437 is_friend = true;
13438
13439 if (decl_p)
13440 {
13441 if (tree ti = DECL_TEMPLATE_INFO (spec))
13442 {
13443 tree tmpl = TI_TEMPLATE (ti);
13444
13445 use_tpl = DECL_USE_TEMPLATE (spec);
13446 if (spec == DECL_TEMPLATE_RESULT (tmpl))
13447 {
13448 spec = tmpl;
13449 gcc_checking_assert (DECL_USE_TEMPLATE (spec) == use_tpl);
13450 }
13451 else if (is_friend)
13452 {
13453 if (TI_TEMPLATE (ti) != entry->tmpl
13454 || !template_args_equal (TI_ARGS (ti), entry->tmpl))
13455 goto template_friend;
13456 }
13457 }
13458 else
13459 {
13460 template_friend:;
13461 gcc_checking_assert (is_friend);
13462 /* This is a friend of a template class, but not the one
13463 that generated entry->spec itself (i.e. it's an
13464 equivalent clone). We do not need to record
13465 this. */
13466 continue;
13467 }
13468 }
13469 else
13470 {
13471 if (TREE_CODE (spec) == ENUMERAL_TYPE)
13472 {
13473 tree ctx = DECL_CONTEXT (TYPE_NAME (spec));
13474
13475 if (TYPE_P (ctx))
13476 use_tpl = CLASSTYPE_USE_TEMPLATE (ctx);
13477 else
13478 use_tpl = DECL_USE_TEMPLATE (ctx);
13479 }
13480 else
13481 use_tpl = CLASSTYPE_USE_TEMPLATE (spec);
13482
13483 tree ti = TYPE_TEMPLATE_INFO (spec);
13484 tree tmpl = TI_TEMPLATE (ti);
13485
13486 spec = TYPE_NAME (spec);
13487 if (spec == DECL_TEMPLATE_RESULT (tmpl))
13488 {
13489 spec = tmpl;
13490 use_tpl = DECL_USE_TEMPLATE (spec);
13491 }
13492 }
13493
13494 bool needs_reaching = false;
13495 if (use_tpl == 1)
13496 /* Implicit instantiations only walked if we reach them. */
13497 needs_reaching = true;
13498 else if (!DECL_LANG_SPECIFIC (STRIP_TEMPLATE (spec))
13499 || !DECL_MODULE_PURVIEW_P (STRIP_TEMPLATE (spec)))
13500 /* Likewise, GMF explicit or partial specializations. */
13501 needs_reaching = true;
13502
13503 #if false && CHECKING_P
13504 /* The instantiation isn't always on
13505 DECL_TEMPLATE_INSTANTIATIONS, */
13506 // FIXME: we probably need to remember this information?
13507 /* Verify the specialization is on the
13508 DECL_TEMPLATE_INSTANTIATIONS of the template. */
13509 for (tree cons = DECL_TEMPLATE_INSTANTIATIONS (entry->tmpl);
13510 cons; cons = TREE_CHAIN (cons))
13511 if (TREE_VALUE (cons) == entry->spec)
13512 {
13513 gcc_assert (entry->args == TREE_PURPOSE (cons));
13514 goto have_spec;
13515 }
13516 gcc_unreachable ();
13517 have_spec:;
13518 #endif
13519
13520 /* Make sure nobody left a tree visited lying about. */
13521 gcc_checking_assert (!TREE_VISITED (spec));
13522 depset *dep = make_dependency (spec, depset::EK_SPECIALIZATION);
13523 if (dep->is_special ())
13524 gcc_unreachable ();
13525 else
13526 {
13527 if (dep->get_entity_kind () == depset::EK_REDIRECT)
13528 dep = dep->deps[0];
13529 else if (dep->get_entity_kind () == depset::EK_SPECIALIZATION)
13530 {
13531 dep->set_special ();
13532 dep->deps.safe_push (reinterpret_cast<depset *> (entry));
13533 if (!decl_p)
13534 dep->set_flag_bit<DB_TYPE_SPEC_BIT> ();
13535 }
13536
13537 if (needs_reaching)
13538 dep->set_flag_bit<DB_UNREACHED_BIT> ();
13539 if (is_friend)
13540 dep->set_flag_bit<DB_FRIEND_SPEC_BIT> ();
13541 }
13542 }
13543 data.release ();
13544 }
13545
13546 /* Add a depset into the mergeable hash. */
13547
13548 void
13549 depset::hash::add_mergeable (depset *mergeable)
13550 {
13551 gcc_checking_assert (is_key_order ());
13552 entity_kind ek = mergeable->get_entity_kind ();
13553 tree decl = mergeable->get_entity ();
13554 gcc_checking_assert (ek < EK_DIRECT_HWM);
13555
13556 depset **slot = entity_slot (decl, true);
13557 gcc_checking_assert (!*slot);
13558 depset *dep = make_entity (decl, ek);
13559 *slot = dep;
13560
13561 worklist.safe_push (dep);
13562
13563 /* So we can locate the mergeable depset this depset refers to,
13564 mark the first dep. */
13565 dep->set_special ();
13566 dep->deps.safe_push (mergeable);
13567 }
13568
13569 /* Find the innermost-namespace scope of DECL, and that
13570 namespace-scope decl. */
13571
13572 tree
13573 find_pending_key (tree decl, tree *decl_p = nullptr)
13574 {
13575 tree ns = decl;
13576 do
13577 {
13578 decl = ns;
13579 ns = CP_DECL_CONTEXT (ns);
13580 if (TYPE_P (ns))
13581 ns = TYPE_NAME (ns);
13582 }
13583 while (TREE_CODE (ns) != NAMESPACE_DECL);
13584
13585 if (decl_p)
13586 *decl_p = decl;
13587
13588 return ns;
13589 }
13590
13591 /* Iteratively find dependencies. During the walk we may find more
13592 entries on the same binding that need walking. */
13593
13594 void
13595 depset::hash::find_dependencies (module_state *module)
13596 {
13597 trees_out walker (NULL, module, *this);
13598 vec<depset *> unreached;
13599 unreached.create (worklist.length ());
13600
13601 for (;;)
13602 {
13603 reached_unreached = false;
13604 while (worklist.length ())
13605 {
13606 depset *item = worklist.pop ();
13607
13608 gcc_checking_assert (!item->is_binding ());
13609 if (item->is_unreached ())
13610 unreached.quick_push (item);
13611 else
13612 {
13613 current = item;
13614 tree decl = current->get_entity ();
13615 dump (is_key_order () ? dumper::MERGE : dumper::DEPEND)
13616 && dump ("Dependencies of %s %C:%N",
13617 is_key_order () ? "key-order"
13618 : current->entity_kind_name (), TREE_CODE (decl), decl);
13619 dump.indent ();
13620 walker.begin ();
13621 if (current->get_entity_kind () == EK_USING)
13622 walker.tree_node (OVL_FUNCTION (decl));
13623 else if (TREE_VISITED (decl))
13624 /* A global tree. */;
13625 else if (item->get_entity_kind () == EK_NAMESPACE)
13626 {
13627 module->note_location (DECL_SOURCE_LOCATION (decl));
13628 add_namespace_context (current, CP_DECL_CONTEXT (decl));
13629 }
13630 else
13631 {
13632 walker.mark_declaration (decl, current->has_defn ());
13633
13634 if (!walker.is_key_order ()
13635 && (item->get_entity_kind () == EK_SPECIALIZATION
13636 || item->get_entity_kind () == EK_PARTIAL
13637 || (item->get_entity_kind () == EK_DECL
13638 && item->is_member ())))
13639 {
13640 tree ns = find_pending_key (decl, nullptr);
13641 add_namespace_context (item, ns);
13642 }
13643
13644 walker.decl_value (decl, current);
13645 if (current->has_defn ())
13646 walker.write_definition (decl);
13647 }
13648 walker.end ();
13649
13650 if (!walker.is_key_order ()
13651 && TREE_CODE (decl) == TEMPLATE_DECL
13652 && !DECL_UNINSTANTIATED_TEMPLATE_FRIEND_P (decl))
13653 {
13654 /* Mark all the explicit & partial specializations as
13655 reachable. We search both specialization lists as some
13656 constrained partial specializations for class types are
13657 only found in DECL_TEMPLATE_SPECIALIZATIONS. */
13658 auto mark_reached = [this](tree spec)
13659 {
13660 if (TYPE_P (spec))
13661 spec = TYPE_NAME (spec);
13662 int use_tpl;
13663 node_template_info (spec, use_tpl);
13664 if (use_tpl & 2)
13665 {
13666 depset *spec_dep = find_dependency (spec);
13667 if (spec_dep->get_entity_kind () == EK_REDIRECT)
13668 spec_dep = spec_dep->deps[0];
13669 if (spec_dep->is_unreached ())
13670 {
13671 reached_unreached = true;
13672 spec_dep->clear_flag_bit<DB_UNREACHED_BIT> ();
13673 dump (dumper::DEPEND)
13674 && dump ("Reaching unreached specialization"
13675 " %C:%N", TREE_CODE (spec), spec);
13676 }
13677 }
13678 };
13679
13680 for (tree cons = DECL_TEMPLATE_INSTANTIATIONS (decl);
13681 cons; cons = TREE_CHAIN (cons))
13682 mark_reached (TREE_VALUE (cons));
13683 for (tree cons = DECL_TEMPLATE_SPECIALIZATIONS (decl);
13684 cons; cons = TREE_CHAIN (cons))
13685 mark_reached (TREE_VALUE (cons));
13686 }
13687
13688 dump.outdent ();
13689 current = NULL;
13690 }
13691 }
13692
13693 if (!reached_unreached)
13694 break;
13695
13696 /* It's possible the we reached the unreached before we
13697 processed it in the above loop, so we'll be doing this an
13698 extra time. However, to avoid that we have to do some
13699 bit shuffling that also involves a scan of the list.
13700 Swings & roundabouts I guess. */
13701 std::swap (worklist, unreached);
13702 }
13703
13704 unreached.release ();
13705 }
13706
13707 /* Compare two entries of a single binding. TYPE_DECL before
13708 non-exported before exported. */
13709
13710 static int
13711 binding_cmp (const void *a_, const void *b_)
13712 {
13713 depset *a = *(depset *const *)a_;
13714 depset *b = *(depset *const *)b_;
13715
13716 tree a_ent = a->get_entity ();
13717 tree b_ent = b->get_entity ();
13718 gcc_checking_assert (a_ent != b_ent
13719 && !a->is_binding ()
13720 && !b->is_binding ());
13721
13722 /* Implicit typedefs come first. */
13723 bool a_implicit = DECL_IMPLICIT_TYPEDEF_P (a_ent);
13724 bool b_implicit = DECL_IMPLICIT_TYPEDEF_P (b_ent);
13725 if (a_implicit || b_implicit)
13726 {
13727 /* A binding with two implicit type decls? That's unpossible! */
13728 gcc_checking_assert (!(a_implicit && b_implicit));
13729 return a_implicit ? -1 : +1; /* Implicit first. */
13730 }
13731
13732 /* Hidden before non-hidden. */
13733 bool a_hidden = a->is_hidden ();
13734 bool b_hidden = b->is_hidden ();
13735 if (a_hidden != b_hidden)
13736 return a_hidden ? -1 : +1;
13737
13738 bool a_using = a->get_entity_kind () == depset::EK_USING;
13739 bool a_export;
13740 if (a_using)
13741 {
13742 a_export = OVL_EXPORT_P (a_ent);
13743 a_ent = OVL_FUNCTION (a_ent);
13744 }
13745 else
13746 a_export = DECL_MODULE_EXPORT_P (TREE_CODE (a_ent) == CONST_DECL
13747 ? TYPE_NAME (TREE_TYPE (a_ent))
13748 : STRIP_TEMPLATE (a_ent));
13749
13750 bool b_using = b->get_entity_kind () == depset::EK_USING;
13751 bool b_export;
13752 if (b_using)
13753 {
13754 b_export = OVL_EXPORT_P (b_ent);
13755 b_ent = OVL_FUNCTION (b_ent);
13756 }
13757 else
13758 b_export = DECL_MODULE_EXPORT_P (TREE_CODE (b_ent) == CONST_DECL
13759 ? TYPE_NAME (TREE_TYPE (b_ent))
13760 : STRIP_TEMPLATE (b_ent));
13761
13762 /* Non-exports before exports. */
13763 if (a_export != b_export)
13764 return a_export ? +1 : -1;
13765
13766 /* At this point we don't care, but want a stable sort. */
13767
13768 if (a_using != b_using)
13769 /* using first. */
13770 return a_using? -1 : +1;
13771
13772 return DECL_UID (a_ent) < DECL_UID (b_ent) ? -1 : +1;
13773 }
13774
13775 /* Sort the bindings, issue errors about bad internal refs. */
13776
13777 bool
13778 depset::hash::finalize_dependencies ()
13779 {
13780 bool ok = true;
13781 depset::hash::iterator end (this->end ());
13782 for (depset::hash::iterator iter (begin ()); iter != end; ++iter)
13783 {
13784 depset *dep = *iter;
13785 if (dep->is_binding ())
13786 {
13787 /* Keep the containing namespace dep first. */
13788 gcc_checking_assert (dep->deps.length () > 1
13789 && (dep->deps[0]->get_entity_kind ()
13790 == EK_NAMESPACE)
13791 && (dep->deps[0]->get_entity ()
13792 == dep->get_entity ()));
13793 if (dep->deps.length () > 2)
13794 gcc_qsort (&dep->deps[1], dep->deps.length () - 1,
13795 sizeof (dep->deps[1]), binding_cmp);
13796 }
13797 else if (dep->refs_internal ())
13798 {
13799 for (unsigned ix = dep->deps.length (); ix--;)
13800 {
13801 depset *rdep = dep->deps[ix];
13802 if (rdep->is_internal ())
13803 {
13804 // FIXME:QOI Better location information? We're
13805 // losing, so it doesn't matter about efficiency
13806 tree decl = dep->get_entity ();
13807 error_at (DECL_SOURCE_LOCATION (decl),
13808 "%q#D references internal linkage entity %q#D",
13809 decl, rdep->get_entity ());
13810 break;
13811 }
13812 }
13813 ok = false;
13814 }
13815 }
13816
13817 return ok;
13818 }
13819
13820 /* Core of TARJAN's algorithm to find Strongly Connected Components
13821 within a graph. See https://en.wikipedia.org/wiki/
13822 Tarjan%27s_strongly_connected_components_algorithm for details.
13823
13824 We use depset::section as lowlink. Completed nodes have
13825 depset::cluster containing the cluster number, with the top
13826 bit set.
13827
13828 A useful property is that the output vector is a reverse
13829 topological sort of the resulting DAG. In our case that means
13830 dependent SCCs are found before their dependers. We make use of
13831 that property. */
13832
13833 void
13834 depset::tarjan::connect (depset *v)
13835 {
13836 gcc_checking_assert (v->is_binding ()
13837 || !(v->is_unreached () || v->is_import ()));
13838
13839 v->cluster = v->section = ++index;
13840 stack.safe_push (v);
13841
13842 /* Walk all our dependencies, ignore a first marked slot */
13843 for (unsigned ix = v->is_special (); ix != v->deps.length (); ix++)
13844 {
13845 depset *dep = v->deps[ix];
13846
13847 if (dep->is_binding () || !dep->is_import ())
13848 {
13849 unsigned lwm = dep->cluster;
13850
13851 if (!dep->cluster)
13852 {
13853 /* A new node. Connect it. */
13854 connect (dep);
13855 lwm = dep->section;
13856 }
13857
13858 if (dep->section && v->section > lwm)
13859 v->section = lwm;
13860 }
13861 }
13862
13863 if (v->section == v->cluster)
13864 {
13865 /* Root of a new SCC. Push all the members onto the result list. */
13866 unsigned num = v->cluster;
13867 depset *p;
13868 do
13869 {
13870 p = stack.pop ();
13871 p->cluster = num;
13872 p->section = 0;
13873 result.quick_push (p);
13874 }
13875 while (p != v);
13876 }
13877 }
13878
13879 /* Compare two depsets. The specific ordering is unimportant, we're
13880 just trying to get consistency. */
13881
13882 static int
13883 depset_cmp (const void *a_, const void *b_)
13884 {
13885 depset *a = *(depset *const *)a_;
13886 depset *b = *(depset *const *)b_;
13887
13888 depset::entity_kind a_kind = a->get_entity_kind ();
13889 depset::entity_kind b_kind = b->get_entity_kind ();
13890
13891 if (a_kind != b_kind)
13892 /* Different entity kinds, order by that. */
13893 return a_kind < b_kind ? -1 : +1;
13894
13895 tree a_decl = a->get_entity ();
13896 tree b_decl = b->get_entity ();
13897 if (a_kind == depset::EK_USING)
13898 {
13899 /* If one is a using, the other must be too. */
13900 a_decl = OVL_FUNCTION (a_decl);
13901 b_decl = OVL_FUNCTION (b_decl);
13902 }
13903
13904 if (a_decl != b_decl)
13905 /* Different entities, order by their UID. */
13906 return DECL_UID (a_decl) < DECL_UID (b_decl) ? -1 : +1;
13907
13908 if (a_kind == depset::EK_BINDING)
13909 {
13910 /* Both are bindings. Order by identifier hash. */
13911 gcc_checking_assert (a->get_name () != b->get_name ());
13912 hashval_t ah = IDENTIFIER_HASH_VALUE (a->get_name ());
13913 hashval_t bh = IDENTIFIER_HASH_VALUE (b->get_name ());
13914 return (ah == bh ? 0 : ah < bh ? -1 : +1);
13915 }
13916
13917 /* They are the same decl. This can happen with two using decls
13918 pointing to the same target. The best we can aim for is
13919 consistently telling qsort how to order them. Hopefully we'll
13920 never have to debug a case that depends on this. Oh, who am I
13921 kidding? Good luck. */
13922 gcc_checking_assert (a_kind == depset::EK_USING);
13923
13924 /* Order by depset address. Not the best, but it is something. */
13925 return a < b ? -1 : +1;
13926 }
13927
13928 /* Sort the clusters in SCC such that those that depend on one another
13929 are placed later. */
13930
13931 // FIXME: I am not convinced this is needed and, if needed,
13932 // sufficient. We emit the decls in this order but that emission
13933 // could walk into later decls (from the body of the decl, or default
13934 // arg-like things). Why doesn't that walk do the right thing? And
13935 // if it DTRT why do we need to sort here -- won't things naturally
13936 // work? I think part of the issue is that when we're going to refer
13937 // to an entity by name, and that entity is in the same cluster as us,
13938 // we need to actually walk that entity, if we've not already walked
13939 // it.
13940 static void
13941 sort_cluster (depset::hash *original, depset *scc[], unsigned size)
13942 {
13943 depset::hash table (size, original);
13944
13945 dump.indent ();
13946
13947 /* Place bindings last, usings before that. It's not strictly
13948 necessary, but it does make things neater. Says Mr OCD. */
13949 unsigned bind_lwm = size;
13950 unsigned use_lwm = size;
13951 for (unsigned ix = 0; ix != use_lwm;)
13952 {
13953 depset *dep = scc[ix];
13954 switch (dep->get_entity_kind ())
13955 {
13956 case depset::EK_BINDING:
13957 /* Move to end. No increment. Notice this could be moving
13958 a using decl, which we'll then move again. */
13959 if (--bind_lwm != ix)
13960 {
13961 scc[ix] = scc[bind_lwm];
13962 scc[bind_lwm] = dep;
13963 }
13964 if (use_lwm > bind_lwm)
13965 {
13966 use_lwm--;
13967 break;
13968 }
13969 /* We must have copied a using, so move it too. */
13970 dep = scc[ix];
13971 gcc_checking_assert (dep->get_entity_kind () == depset::EK_USING);
13972 /* FALLTHROUGH */
13973
13974 case depset::EK_USING:
13975 if (--use_lwm != ix)
13976 {
13977 scc[ix] = scc[use_lwm];
13978 scc[use_lwm] = dep;
13979 }
13980 break;
13981
13982 case depset::EK_DECL:
13983 case depset::EK_SPECIALIZATION:
13984 case depset::EK_PARTIAL:
13985 table.add_mergeable (dep);
13986 ix++;
13987 break;
13988
13989 default:
13990 gcc_unreachable ();
13991 }
13992 }
13993
13994 gcc_checking_assert (use_lwm <= bind_lwm);
13995 dump (dumper::MERGE) && dump ("Ordering %u/%u depsets", use_lwm, size);
13996
13997 table.find_dependencies (nullptr);
13998
13999 vec<depset *> order = table.connect ();
14000 gcc_checking_assert (order.length () == use_lwm);
14001
14002 /* Now rewrite entries [0,lwm), in the dependency order we
14003 discovered. Usually each entity is in its own cluster. Rarely,
14004 we can get multi-entity clusters, in which case all but one must
14005 only be reached from within the cluster. This happens for
14006 something like:
14007
14008 template<typename T>
14009 auto Foo (const T &arg) -> TPL<decltype (arg)>;
14010
14011 The instantiation of TPL will be in the specialization table, and
14012 refer to Foo via arg. But we can only get to that specialization
14013 from Foo's declaration, so we only need to treat Foo as mergable
14014 (We'll do structural comparison of TPL<decltype (arg)>).
14015
14016 Finding the single cluster entry dep is very tricky and
14017 expensive. Let's just not do that. It's harmless in this case
14018 anyway. */
14019 unsigned pos = 0;
14020 unsigned cluster = ~0u;
14021 for (unsigned ix = 0; ix != order.length (); ix++)
14022 {
14023 gcc_checking_assert (order[ix]->is_special ());
14024 depset *dep = order[ix]->deps[0];
14025 scc[pos++] = dep;
14026 dump (dumper::MERGE)
14027 && dump ("Mergeable %u is %N%s", ix, dep->get_entity (),
14028 order[ix]->cluster == cluster ? " (tight)" : "");
14029 cluster = order[ix]->cluster;
14030 }
14031
14032 gcc_checking_assert (pos == use_lwm);
14033
14034 order.release ();
14035 dump (dumper::MERGE) && dump ("Ordered %u keys", pos);
14036 dump.outdent ();
14037 }
14038
14039 /* Reduce graph to SCCS clusters. SCCS will be populated with the
14040 depsets in dependency order. Each depset's CLUSTER field contains
14041 its cluster number. Each SCC has a unique cluster number, and are
14042 contiguous in SCCS. Cluster numbers are otherwise arbitrary. */
14043
14044 vec<depset *>
14045 depset::hash::connect ()
14046 {
14047 tarjan connector (size ());
14048 vec<depset *> deps;
14049 deps.create (size ());
14050 iterator end (this->end ());
14051 for (iterator iter (begin ()); iter != end; ++iter)
14052 {
14053 depset *item = *iter;
14054
14055 entity_kind kind = item->get_entity_kind ();
14056 if (kind == EK_BINDING
14057 || !(kind == EK_REDIRECT
14058 || item->is_unreached ()
14059 || item->is_import ()))
14060 deps.quick_push (item);
14061 }
14062
14063 /* Iteration over the hash table is an unspecified ordering. While
14064 that has advantages, it causes 2 problems. Firstly repeatable
14065 builds are tricky. Secondly creating testcases that check
14066 dependencies are correct by making sure a bad ordering would
14067 happen if that was wrong. */
14068 deps.qsort (depset_cmp);
14069
14070 while (deps.length ())
14071 {
14072 depset *v = deps.pop ();
14073 dump (dumper::CLUSTER) &&
14074 (v->is_binding ()
14075 ? dump ("Connecting binding %P", v->get_entity (), v->get_name ())
14076 : dump ("Connecting %s %s %C:%N",
14077 is_key_order () ? "key-order"
14078 : !v->has_defn () ? "declaration" : "definition",
14079 v->entity_kind_name (), TREE_CODE (v->get_entity ()),
14080 v->get_entity ()));
14081 if (!v->cluster)
14082 connector.connect (v);
14083 }
14084
14085 deps.release ();
14086 return connector.result;
14087 }
14088
14089 /* Initialize location spans. */
14090
14091 void
14092 loc_spans::init (const line_maps *lmaps, const line_map_ordinary *map)
14093 {
14094 gcc_checking_assert (!init_p ());
14095 spans = new vec<span> ();
14096 spans->reserve (20);
14097
14098 span interval;
14099 interval.ordinary.first = 0;
14100 interval.macro.second = MAX_LOCATION_T + 1;
14101 interval.ordinary_delta = interval.macro_delta = 0;
14102
14103 /* A span for reserved fixed locs. */
14104 interval.ordinary.second
14105 = MAP_START_LOCATION (LINEMAPS_ORDINARY_MAP_AT (line_table, 0));
14106 interval.macro.first = interval.macro.second;
14107 dump (dumper::LOCATION)
14108 && dump ("Fixed span %u ordinary:[%u,%u) macro:[%u,%u)", spans->length (),
14109 interval.ordinary.first, interval.ordinary.second,
14110 interval.macro.first, interval.macro.second);
14111 spans->quick_push (interval);
14112
14113 /* A span for command line & forced headers. */
14114 interval.ordinary.first = interval.ordinary.second;
14115 interval.macro.second = interval.macro.first;
14116 if (map)
14117 {
14118 interval.ordinary.second = map->start_location;
14119 interval.macro.first = LINEMAPS_MACRO_LOWEST_LOCATION (lmaps);
14120 }
14121 dump (dumper::LOCATION)
14122 && dump ("Pre span %u ordinary:[%u,%u) macro:[%u,%u)", spans->length (),
14123 interval.ordinary.first, interval.ordinary.second,
14124 interval.macro.first, interval.macro.second);
14125 spans->quick_push (interval);
14126
14127 /* Start an interval for the main file. */
14128 interval.ordinary.first = interval.ordinary.second;
14129 interval.macro.second = interval.macro.first;
14130 dump (dumper::LOCATION)
14131 && dump ("Main span %u ordinary:[%u,*) macro:[*,%u)", spans->length (),
14132 interval.ordinary.first, interval.macro.second);
14133 spans->quick_push (interval);
14134 }
14135
14136 /* Reopen the span, if we want the about-to-be-inserted set of maps to
14137 be propagated in our own location table. I.e. we are the primary
14138 interface and we're importing a partition. */
14139
14140 bool
14141 loc_spans::maybe_propagate (module_state *import, location_t hwm)
14142 {
14143 bool opened = (module_interface_p () && !module_partition_p ()
14144 && import->is_partition ());
14145 if (opened)
14146 open (hwm);
14147 return opened;
14148 }
14149
14150 /* Open a new linemap interval. The just-created ordinary map is the
14151 first map of the interval. */
14152
14153 void
14154 loc_spans::open (location_t hwm)
14155 {
14156 span interval;
14157 interval.ordinary.first = interval.ordinary.second = hwm;
14158 interval.macro.first = interval.macro.second
14159 = LINEMAPS_MACRO_LOWEST_LOCATION (line_table);
14160 interval.ordinary_delta = interval.macro_delta = 0;
14161 dump (dumper::LOCATION)
14162 && dump ("Opening span %u ordinary:[%u,... macro:...,%u)",
14163 spans->length (), interval.ordinary.first,
14164 interval.macro.second);
14165 if (spans->length ())
14166 {
14167 /* No overlapping! */
14168 auto &last = spans->last ();
14169 gcc_checking_assert (interval.ordinary.first >= last.ordinary.second);
14170 gcc_checking_assert (interval.macro.second <= last.macro.first);
14171 }
14172 spans->safe_push (interval);
14173 }
14174
14175 /* Close out the current linemap interval. The last maps are within
14176 the interval. */
14177
14178 void
14179 loc_spans::close ()
14180 {
14181 span &interval = spans->last ();
14182
14183 interval.ordinary.second
14184 = ((line_table->highest_location + (1 << line_table->default_range_bits))
14185 & ~((1u << line_table->default_range_bits) - 1));
14186 interval.macro.first = LINEMAPS_MACRO_LOWEST_LOCATION (line_table);
14187 dump (dumper::LOCATION)
14188 && dump ("Closing span %u ordinary:[%u,%u) macro:[%u,%u)",
14189 spans->length () - 1,
14190 interval.ordinary.first,interval.ordinary.second,
14191 interval.macro.first, interval.macro.second);
14192 }
14193
14194 /* Given an ordinary location LOC, return the lmap_interval it resides
14195 in. NULL if it is not in an interval. */
14196
14197 const loc_spans::span *
14198 loc_spans::ordinary (location_t loc)
14199 {
14200 unsigned len = spans->length ();
14201 unsigned pos = 0;
14202 while (len)
14203 {
14204 unsigned half = len / 2;
14205 const span &probe = (*spans)[pos + half];
14206 if (loc < probe.ordinary.first)
14207 len = half;
14208 else if (loc < probe.ordinary.second)
14209 return &probe;
14210 else
14211 {
14212 pos += half + 1;
14213 len = len - (half + 1);
14214 }
14215 }
14216 return NULL;
14217 }
14218
14219 /* Likewise, given a macro location LOC, return the lmap interval it
14220 resides in. */
14221
14222 const loc_spans::span *
14223 loc_spans::macro (location_t loc)
14224 {
14225 unsigned len = spans->length ();
14226 unsigned pos = 0;
14227 while (len)
14228 {
14229 unsigned half = len / 2;
14230 const span &probe = (*spans)[pos + half];
14231 if (loc >= probe.macro.second)
14232 len = half;
14233 else if (loc >= probe.macro.first)
14234 return &probe;
14235 else
14236 {
14237 pos += half + 1;
14238 len = len - (half + 1);
14239 }
14240 }
14241 return NULL;
14242 }
14243
14244 /* Return the ordinary location closest to FROM. */
14245
14246 static location_t
14247 ordinary_loc_of (line_maps *lmaps, location_t from)
14248 {
14249 while (!IS_ORDINARY_LOC (from))
14250 {
14251 if (IS_ADHOC_LOC (from))
14252 from = get_location_from_adhoc_loc (lmaps, from);
14253 if (from >= LINEMAPS_MACRO_LOWEST_LOCATION (lmaps))
14254 {
14255 /* Find the ordinary location nearest FROM. */
14256 const line_map *map = linemap_lookup (lmaps, from);
14257 const line_map_macro *mac_map = linemap_check_macro (map);
14258 from = mac_map->get_expansion_point_location ();
14259 }
14260 }
14261 return from;
14262 }
14263
14264 static module_state **
14265 get_module_slot (tree name, module_state *parent, bool partition, bool insert)
14266 {
14267 module_state_hash::compare_type ct (name, uintptr_t (parent) | partition);
14268 hashval_t hv = module_state_hash::hash (ct);
14269
14270 return modules_hash->find_slot_with_hash (ct, hv, insert ? INSERT : NO_INSERT);
14271 }
14272
14273 static module_state *
14274 get_primary (module_state *parent)
14275 {
14276 while (parent->is_partition ())
14277 parent = parent->parent;
14278
14279 if (!parent->name)
14280 // Implementation unit has null name
14281 parent = parent->parent;
14282
14283 return parent;
14284 }
14285
14286 /* Find or create module NAME & PARENT in the hash table. */
14287
14288 module_state *
14289 get_module (tree name, module_state *parent, bool partition)
14290 {
14291 /* We might be given an empty NAME if preprocessing fails to handle
14292 a header-name token. */
14293 if (name && TREE_CODE (name) == STRING_CST
14294 && TREE_STRING_LENGTH (name) == 0)
14295 return nullptr;
14296
14297 if (partition)
14298 {
14299 if (!parent)
14300 parent = get_primary ((*modules)[0]);
14301
14302 if (!parent->is_partition () && !parent->flatname)
14303 parent->set_flatname ();
14304 }
14305
14306 module_state **slot = get_module_slot (name, parent, partition, true);
14307 module_state *state = *slot;
14308 if (!state)
14309 {
14310 state = (new (ggc_alloc<module_state> ())
14311 module_state (name, parent, partition));
14312 *slot = state;
14313 }
14314 return state;
14315 }
14316
14317 /* Process string name PTR into a module_state. */
14318
14319 static module_state *
14320 get_module (const char *ptr)
14321 {
14322 /* On DOS based file systems, there is an ambiguity with A:B which can be
14323 interpreted as a module Module:Partition or Drive:PATH. Interpret strings
14324 which clearly starts as pathnames as header-names and everything else is
14325 treated as a (possibly malformed) named moduled. */
14326 if (IS_DIR_SEPARATOR (ptr[ptr[0] == '.']) // ./FOO or /FOO
14327 #if HAVE_DOS_BASED_FILE_SYSTEM
14328 || (HAS_DRIVE_SPEC (ptr) && IS_DIR_SEPARATOR (ptr[2])) // A:/FOO
14329 #endif
14330 || false)
14331 /* A header name. */
14332 return get_module (build_string (strlen (ptr), ptr));
14333
14334 bool partition = false;
14335 module_state *mod = NULL;
14336
14337 for (const char *probe = ptr;; probe++)
14338 if (!*probe || *probe == '.' || *probe == ':')
14339 {
14340 if (probe == ptr)
14341 return NULL;
14342
14343 mod = get_module (get_identifier_with_length (ptr, probe - ptr),
14344 mod, partition);
14345 ptr = probe;
14346 if (*ptr == ':')
14347 {
14348 if (partition)
14349 return NULL;
14350 partition = true;
14351 }
14352
14353 if (!*ptr++)
14354 break;
14355 }
14356 else if (!(ISALPHA (*probe) || *probe == '_'
14357 || (probe != ptr && ISDIGIT (*probe))))
14358 return NULL;
14359
14360 return mod;
14361 }
14362
14363 /* Create a new mapper connecting to OPTION. */
14364
14365 module_client *
14366 make_mapper (location_t loc, class mkdeps *deps)
14367 {
14368 timevar_start (TV_MODULE_MAPPER);
14369 const char *option = module_mapper_name;
14370 if (!option)
14371 option = getenv ("CXX_MODULE_MAPPER");
14372
14373 mapper = module_client::open_module_client
14374 (loc, option, deps, &set_cmi_repo,
14375 (save_decoded_options[0].opt_index == OPT_SPECIAL_program_name)
14376 && save_decoded_options[0].arg != progname
14377 ? save_decoded_options[0].arg : nullptr);
14378
14379 timevar_stop (TV_MODULE_MAPPER);
14380
14381 return mapper;
14382 }
14383
14384 static unsigned lazy_snum;
14385
14386 static bool
14387 recursive_lazy (unsigned snum = ~0u)
14388 {
14389 if (lazy_snum)
14390 {
14391 error_at (input_location, "recursive lazy load");
14392 return true;
14393 }
14394
14395 lazy_snum = snum;
14396 return false;
14397 }
14398
14399 /* If THIS is the current purview, issue an import error and return false. */
14400
14401 bool
14402 module_state::check_not_purview (location_t from)
14403 {
14404 module_state *imp = (*modules)[0];
14405 if (imp && !imp->name)
14406 imp = imp->parent;
14407 if (imp == this)
14408 {
14409 /* Cannot import the current module. */
14410 error_at (from, "cannot import module in its own purview");
14411 inform (loc, "module %qs declared here", get_flatname ());
14412 return false;
14413 }
14414 return true;
14415 }
14416
14417 /* Module name substitutions. */
14418 static vec<module_state *,va_heap> substs;
14419
14420 void
14421 module_state::mangle (bool include_partition)
14422 {
14423 if (subst)
14424 mangle_module_substitution (subst);
14425 else
14426 {
14427 if (parent)
14428 parent->mangle (include_partition);
14429 if (include_partition || !is_partition ())
14430 {
14431 // Partitions are significant for global initializer
14432 // functions
14433 bool partition = is_partition () && !parent->is_partition ();
14434 subst = mangle_module_component (name, partition);
14435 substs.safe_push (this);
14436 }
14437 }
14438 }
14439
14440 void
14441 mangle_module (int mod, bool include_partition)
14442 {
14443 module_state *imp = (*modules)[mod];
14444
14445 gcc_checking_assert (!imp->is_header ());
14446
14447 if (!imp->name)
14448 /* Set when importing the primary module interface. */
14449 imp = imp->parent;
14450
14451 imp->mangle (include_partition);
14452 }
14453
14454 /* Clean up substitutions. */
14455 void
14456 mangle_module_fini ()
14457 {
14458 while (substs.length ())
14459 substs.pop ()->subst = 0;
14460 }
14461
14462 /* Announce WHAT about the module. */
14463
14464 void
14465 module_state::announce (const char *what) const
14466 {
14467 if (noisy_p ())
14468 {
14469 fprintf (stderr, " %s:%s", what, get_flatname ());
14470 fflush (stderr);
14471 }
14472 }
14473
14474 /* A human-readable README section. The contents of this section to
14475 not contribute to the CRC, so the contents can change per
14476 compilation. That allows us to embed CWD, hostname, build time and
14477 what not. It is a STRTAB that may be extracted with:
14478 readelf -pgnu.c++.README $(module).gcm */
14479
14480 void
14481 module_state::write_readme (elf_out *to, cpp_reader *reader, const char *dialect)
14482 {
14483 bytes_out readme (to);
14484
14485 readme.begin (false);
14486
14487 readme.printf ("GNU C++ %s",
14488 is_header () ? "header unit"
14489 : !is_partition () ? "primary interface"
14490 : is_interface () ? "interface partition"
14491 : "internal partition");
14492
14493 /* Compiler's version. */
14494 readme.printf ("compiler: %s", version_string);
14495
14496 /* Module format version. */
14497 verstr_t string;
14498 version2string (MODULE_VERSION, string);
14499 readme.printf ("version: %s", string);
14500
14501 /* Module information. */
14502 readme.printf ("module: %s", get_flatname ());
14503 readme.printf ("source: %s", main_input_filename);
14504 readme.printf ("dialect: %s", dialect);
14505 if (extensions)
14506 readme.printf ("extensions: %s",
14507 extensions & SE_OPENMP ? "-fopenmp" : "");
14508
14509 /* The following fields could be expected to change between
14510 otherwise identical compilations. Consider a distributed build
14511 system. We should have a way of overriding that. */
14512 if (char *cwd = getcwd (NULL, 0))
14513 {
14514 readme.printf ("cwd: %s", cwd);
14515 free (cwd);
14516 }
14517 readme.printf ("repository: %s", cmi_repo ? cmi_repo : ".");
14518 #if NETWORKING
14519 {
14520 char hostname[64];
14521 if (!gethostname (hostname, sizeof (hostname)))
14522 readme.printf ("host: %s", hostname);
14523 }
14524 #endif
14525 {
14526 /* This of course will change! */
14527 time_t stampy;
14528 auto kind = cpp_get_date (reader, &stampy);
14529 if (kind != CPP_time_kind::UNKNOWN)
14530 {
14531 struct tm *time;
14532
14533 time = gmtime (&stampy);
14534 readme.print_time ("build", time, "UTC");
14535
14536 if (kind == CPP_time_kind::DYNAMIC)
14537 {
14538 time = localtime (&stampy);
14539 readme.print_time ("local", time,
14540 #if defined (__USE_MISC) || defined (__USE_BSD) /* Is there a better way? */
14541 time->tm_zone
14542 #else
14543 ""
14544 #endif
14545 );
14546 }
14547 }
14548 }
14549
14550 /* Its direct imports. */
14551 for (unsigned ix = 1; ix < modules->length (); ix++)
14552 {
14553 module_state *state = (*modules)[ix];
14554
14555 if (state->is_direct ())
14556 readme.printf ("%s: %s %s", state->exported_p ? "export" : "import",
14557 state->get_flatname (), state->filename);
14558 }
14559
14560 readme.end (to, to->name (MOD_SNAME_PFX ".README"), NULL);
14561 }
14562
14563 /* Sort environment var names in reverse order. */
14564
14565 static int
14566 env_var_cmp (const void *a_, const void *b_)
14567 {
14568 const unsigned char *a = *(const unsigned char *const *)a_;
14569 const unsigned char *b = *(const unsigned char *const *)b_;
14570
14571 for (unsigned ix = 0; ; ix++)
14572 {
14573 bool a_end = !a[ix] || a[ix] == '=';
14574 if (a[ix] == b[ix])
14575 {
14576 if (a_end)
14577 break;
14578 }
14579 else
14580 {
14581 bool b_end = !b[ix] || b[ix] == '=';
14582
14583 if (!a_end && !b_end)
14584 return a[ix] < b[ix] ? +1 : -1;
14585 if (a_end && b_end)
14586 break;
14587 return a_end ? +1 : -1;
14588 }
14589 }
14590
14591 return 0;
14592 }
14593
14594 /* Write the environment. It is a STRTAB that may be extracted with:
14595 readelf -pgnu.c++.ENV $(module).gcm */
14596
14597 void
14598 module_state::write_env (elf_out *to)
14599 {
14600 vec<const char *> vars;
14601 vars.create (20);
14602
14603 extern char **environ;
14604 while (const char *var = environ[vars.length ()])
14605 vars.safe_push (var);
14606 vars.qsort (env_var_cmp);
14607
14608 bytes_out env (to);
14609 env.begin (false);
14610 while (vars.length ())
14611 env.printf ("%s", vars.pop ());
14612 env.end (to, to->name (MOD_SNAME_PFX ".ENV"), NULL);
14613
14614 vars.release ();
14615 }
14616
14617 /* Write the direct or indirect imports.
14618 u:N
14619 {
14620 u:index
14621 s:name
14622 u32:crc
14623 s:filename (direct)
14624 u:exported (direct)
14625 } imports[N]
14626 */
14627
14628 void
14629 module_state::write_imports (bytes_out &sec, bool direct)
14630 {
14631 unsigned count = 0;
14632
14633 for (unsigned ix = 1; ix < modules->length (); ix++)
14634 {
14635 module_state *imp = (*modules)[ix];
14636
14637 if (imp->remap && imp->is_direct () == direct)
14638 count++;
14639 }
14640
14641 gcc_assert (!direct || count);
14642
14643 sec.u (count);
14644 for (unsigned ix = 1; ix < modules->length (); ix++)
14645 {
14646 module_state *imp = (*modules)[ix];
14647
14648 if (imp->remap && imp->is_direct () == direct)
14649 {
14650 dump () && dump ("Writing %simport:%u->%u %M (crc=%x)",
14651 !direct ? "indirect "
14652 : imp->exported_p ? "exported " : "",
14653 ix, imp->remap, imp, imp->crc);
14654 sec.u (imp->remap);
14655 sec.str (imp->get_flatname ());
14656 sec.u32 (imp->crc);
14657 if (direct)
14658 {
14659 write_location (sec, imp->imported_from ());
14660 sec.str (imp->filename);
14661 int exportedness = 0;
14662 if (imp->exported_p)
14663 exportedness = +1;
14664 else if (!imp->is_purview_direct ())
14665 exportedness = -1;
14666 sec.i (exportedness);
14667 }
14668 }
14669 }
14670 }
14671
14672 /* READER, LMAPS != NULL == direct imports,
14673 == NUL == indirect imports. */
14674
14675 unsigned
14676 module_state::read_imports (bytes_in &sec, cpp_reader *reader, line_maps *lmaps)
14677 {
14678 unsigned count = sec.u ();
14679 unsigned loaded = 0;
14680
14681 while (count--)
14682 {
14683 unsigned ix = sec.u ();
14684 if (ix >= slurp->remap->length () || !ix || (*slurp->remap)[ix])
14685 {
14686 sec.set_overrun ();
14687 break;
14688 }
14689
14690 const char *name = sec.str (NULL);
14691 module_state *imp = get_module (name);
14692 unsigned crc = sec.u32 ();
14693 int exportedness = 0;
14694
14695 /* If the import is a partition, it must be the same primary
14696 module as this TU. */
14697 if (imp && imp->is_partition () &&
14698 (!named_module_p ()
14699 || (get_primary ((*modules)[0]) != get_primary (imp))))
14700 imp = NULL;
14701
14702 if (!imp)
14703 sec.set_overrun ();
14704 if (sec.get_overrun ())
14705 break;
14706
14707 if (lmaps)
14708 {
14709 /* A direct import, maybe load it. */
14710 location_t floc = read_location (sec);
14711 const char *fname = sec.str (NULL);
14712 exportedness = sec.i ();
14713
14714 if (sec.get_overrun ())
14715 break;
14716
14717 if (!imp->check_not_purview (loc))
14718 continue;
14719
14720 if (imp->loadedness == ML_NONE)
14721 {
14722 imp->loc = floc;
14723 imp->crc = crc;
14724 if (!imp->get_flatname ())
14725 imp->set_flatname ();
14726
14727 unsigned n = dump.push (imp);
14728
14729 if (!imp->filename && fname)
14730 imp->filename = xstrdup (fname);
14731
14732 if (imp->is_partition ())
14733 dump () && dump ("Importing elided partition %M", imp);
14734
14735 if (!imp->do_import (reader, false))
14736 imp = NULL;
14737 dump.pop (n);
14738 if (!imp)
14739 continue;
14740 }
14741
14742 if (is_partition ())
14743 {
14744 if (!imp->is_direct ())
14745 imp->directness = MD_PARTITION_DIRECT;
14746 if (exportedness > 0)
14747 imp->exported_p = true;
14748 }
14749 }
14750 else
14751 {
14752 /* An indirect import, find it, it should already be here. */
14753 if (imp->loadedness == ML_NONE)
14754 {
14755 error_at (loc, "indirect import %qs is not already loaded", name);
14756 continue;
14757 }
14758 }
14759
14760 if (imp->crc != crc)
14761 error_at (loc, "import %qs has CRC mismatch", imp->get_flatname ());
14762
14763 (*slurp->remap)[ix] = (imp->mod << 1) | (lmaps != NULL);
14764
14765 if (lmaps && exportedness >= 0)
14766 set_import (imp, bool (exportedness));
14767 dump () && dump ("Found %simport:%u %M->%u", !lmaps ? "indirect "
14768 : exportedness > 0 ? "exported "
14769 : exportedness < 0 ? "gmf" : "", ix, imp,
14770 imp->mod);
14771 loaded++;
14772 }
14773
14774 return loaded;
14775 }
14776
14777 /* Write the import table to MOD_SNAME_PFX.imp. */
14778
14779 void
14780 module_state::write_imports (elf_out *to, unsigned *crc_ptr)
14781 {
14782 dump () && dump ("Writing imports");
14783 dump.indent ();
14784
14785 bytes_out sec (to);
14786 sec.begin ();
14787
14788 write_imports (sec, true);
14789 write_imports (sec, false);
14790
14791 sec.end (to, to->name (MOD_SNAME_PFX ".imp"), crc_ptr);
14792 dump.outdent ();
14793 }
14794
14795 bool
14796 module_state::read_imports (cpp_reader *reader, line_maps *lmaps)
14797 {
14798 bytes_in sec;
14799
14800 if (!sec.begin (loc, from (), MOD_SNAME_PFX ".imp"))
14801 return false;
14802
14803 dump () && dump ("Reading %u imports", slurp->remap->length () - 1);
14804 dump.indent ();
14805
14806 /* Read the imports. */
14807 unsigned direct = read_imports (sec, reader, lmaps);
14808 unsigned indirect = read_imports (sec, NULL, NULL);
14809 if (direct + indirect + 1 != slurp->remap->length ())
14810 from ()->set_error (elf::E_BAD_IMPORT);
14811
14812 dump.outdent ();
14813 if (!sec.end (from ()))
14814 return false;
14815 return true;
14816 }
14817
14818 /* We're the primary module interface, but have partitions. Document
14819 them so that non-partition module implementation units know which
14820 have already been loaded. */
14821
14822 void
14823 module_state::write_partitions (elf_out *to, unsigned count, unsigned *crc_ptr)
14824 {
14825 dump () && dump ("Writing %u elided partitions", count);
14826 dump.indent ();
14827
14828 bytes_out sec (to);
14829 sec.begin ();
14830
14831 for (unsigned ix = 1; ix != modules->length (); ix++)
14832 {
14833 module_state *imp = (*modules)[ix];
14834 if (imp->is_partition ())
14835 {
14836 dump () && dump ("Writing elided partition %M (crc=%x)",
14837 imp, imp->crc);
14838 sec.str (imp->get_flatname ());
14839 sec.u32 (imp->crc);
14840 write_location (sec, imp->is_direct ()
14841 ? imp->imported_from () : UNKNOWN_LOCATION);
14842 sec.str (imp->filename);
14843 }
14844 }
14845
14846 sec.end (to, to->name (MOD_SNAME_PFX ".prt"), crc_ptr);
14847 dump.outdent ();
14848 }
14849
14850 bool
14851 module_state::read_partitions (unsigned count)
14852 {
14853 bytes_in sec;
14854 if (!sec.begin (loc, from (), MOD_SNAME_PFX ".prt"))
14855 return false;
14856
14857 dump () && dump ("Reading %u elided partitions", count);
14858 dump.indent ();
14859
14860 while (count--)
14861 {
14862 const char *name = sec.str (NULL);
14863 unsigned crc = sec.u32 ();
14864 location_t floc = read_location (sec);
14865 const char *fname = sec.str (NULL);
14866
14867 if (sec.get_overrun ())
14868 break;
14869
14870 dump () && dump ("Reading elided partition %s (crc=%x)", name, crc);
14871
14872 module_state *imp = get_module (name);
14873 if (!imp /* Partition should be ... */
14874 || !imp->is_partition () /* a partition ... */
14875 || imp->loadedness != ML_NONE /* that is not yet loaded ... */
14876 || get_primary (imp) != this) /* whose primary is this. */
14877 {
14878 sec.set_overrun ();
14879 break;
14880 }
14881
14882 if (!imp->has_location ())
14883 imp->loc = floc;
14884 imp->crc = crc;
14885 if (!imp->filename && fname[0])
14886 imp->filename = xstrdup (fname);
14887 }
14888
14889 dump.outdent ();
14890 if (!sec.end (from ()))
14891 return false;
14892 return true;
14893 }
14894
14895 /* Data for config reading and writing. */
14896 struct module_state_config {
14897 const char *dialect_str;
14898 unsigned num_imports;
14899 unsigned num_partitions;
14900 unsigned num_entities;
14901 unsigned ordinary_locs;
14902 unsigned macro_locs;
14903 unsigned loc_range_bits;
14904 unsigned active_init;
14905
14906 public:
14907 module_state_config ()
14908 :dialect_str (get_dialect ()),
14909 num_imports (0), num_partitions (0), num_entities (0),
14910 ordinary_locs (0), macro_locs (0), loc_range_bits (0),
14911 active_init (0)
14912 {
14913 }
14914
14915 static void release ()
14916 {
14917 XDELETEVEC (dialect);
14918 dialect = NULL;
14919 }
14920
14921 private:
14922 static const char *get_dialect ();
14923 static char *dialect;
14924 };
14925
14926 char *module_state_config::dialect;
14927
14928 /* Generate a string of the significant compilation options.
14929 Generally assume the user knows what they're doing, in the same way
14930 that object files can be mixed. */
14931
14932 const char *
14933 module_state_config::get_dialect ()
14934 {
14935 if (!dialect)
14936 dialect = concat (get_cxx_dialect_name (cxx_dialect),
14937 /* C++ implies these, only show if disabled. */
14938 flag_exceptions ? "" : "/no-exceptions",
14939 flag_rtti ? "" : "/no-rtti",
14940 flag_new_inheriting_ctors ? "" : "/old-inheriting-ctors",
14941 /* C++ 20 implies concepts. */
14942 cxx_dialect < cxx20 && flag_concepts ? "/concepts" : "",
14943 flag_coroutines ? "/coroutines" : "",
14944 flag_module_implicit_inline ? "/implicit-inline" : "",
14945 flag_contracts ? "/contracts" : "",
14946 NULL);
14947
14948 return dialect;
14949 }
14950
14951 /* Contents of a cluster. */
14952 enum cluster_tag {
14953 ct_decl, /* A decl. */
14954 ct_defn, /* A definition. */
14955 ct_bind, /* A binding. */
14956 ct_hwm
14957 };
14958
14959 /* Binding modifiers. */
14960 enum ct_bind_flags
14961 {
14962 cbf_export = 0x1, /* An exported decl. */
14963 cbf_hidden = 0x2, /* A hidden (friend) decl. */
14964 cbf_using = 0x4, /* A using decl. */
14965 cbf_wrapped = 0x8, /* ... that is wrapped. */
14966 };
14967
14968 /* DEP belongs to a different cluster, seed it to prevent
14969 unfortunately timed duplicate import. */
14970 // FIXME: QOI For inter-cluster references we could just only pick
14971 // one entity from an earlier cluster. Even better track
14972 // dependencies between earlier clusters
14973
14974 void
14975 module_state::intercluster_seed (trees_out &sec, unsigned index_hwm, depset *dep)
14976 {
14977 if (dep->is_import ()
14978 || dep->cluster < index_hwm)
14979 {
14980 tree ent = dep->get_entity ();
14981 if (!TREE_VISITED (ent))
14982 {
14983 sec.tree_node (ent);
14984 dump (dumper::CLUSTER)
14985 && dump ("Seeded %s %N",
14986 dep->is_import () ? "import" : "intercluster", ent);
14987 }
14988 }
14989 }
14990
14991 /* Write the cluster of depsets in SCC[0-SIZE).
14992 dep->section -> section number
14993 dep->cluster -> entity number
14994 */
14995
14996 unsigned
14997 module_state::write_cluster (elf_out *to, depset *scc[], unsigned size,
14998 depset::hash &table, unsigned *counts,
14999 unsigned *crc_ptr)
15000 {
15001 dump () && dump ("Writing section:%u %u depsets", table.section, size);
15002 dump.indent ();
15003
15004 trees_out sec (to, this, table, table.section);
15005 sec.begin ();
15006 unsigned index_lwm = counts[MSC_entities];
15007
15008 /* Determine entity numbers, mark for writing. */
15009 dump (dumper::CLUSTER) && dump ("Cluster members:") && (dump.indent (), true);
15010 for (unsigned ix = 0; ix != size; ix++)
15011 {
15012 depset *b = scc[ix];
15013
15014 switch (b->get_entity_kind ())
15015 {
15016 default:
15017 gcc_unreachable ();
15018
15019 case depset::EK_BINDING:
15020 {
15021 dump (dumper::CLUSTER)
15022 && dump ("[%u]=%s %P", ix, b->entity_kind_name (),
15023 b->get_entity (), b->get_name ());
15024 depset *ns_dep = b->deps[0];
15025 gcc_checking_assert (ns_dep->get_entity_kind ()
15026 == depset::EK_NAMESPACE
15027 && ns_dep->get_entity () == b->get_entity ());
15028 for (unsigned jx = b->deps.length (); --jx;)
15029 {
15030 depset *dep = b->deps[jx];
15031 // We could be declaring something that is also a
15032 // (merged) import
15033 gcc_checking_assert (dep->is_import ()
15034 || TREE_VISITED (dep->get_entity ())
15035 || (dep->get_entity_kind ()
15036 == depset::EK_USING));
15037 }
15038 }
15039 break;
15040
15041 case depset::EK_DECL:
15042 case depset::EK_SPECIALIZATION:
15043 case depset::EK_PARTIAL:
15044 b->cluster = counts[MSC_entities]++;
15045 sec.mark_declaration (b->get_entity (), b->has_defn ());
15046 /* FALLTHROUGH */
15047
15048 case depset::EK_USING:
15049 gcc_checking_assert (!b->is_import ()
15050 && !b->is_unreached ());
15051 dump (dumper::CLUSTER)
15052 && dump ("[%u]=%s %s %N", ix, b->entity_kind_name (),
15053 b->has_defn () ? "definition" : "declaration",
15054 b->get_entity ());
15055 break;
15056 }
15057 }
15058 dump (dumper::CLUSTER) && (dump.outdent (), true);
15059
15060 /* Ensure every out-of-cluster decl is referenced before we start
15061 streaming. We must do both imports *and* earlier clusters,
15062 because the latter could reach into the former and cause a
15063 duplicate loop. */
15064 sec.set_importing (+1);
15065 for (unsigned ix = 0; ix != size; ix++)
15066 {
15067 depset *b = scc[ix];
15068 for (unsigned jx = b->is_special (); jx != b->deps.length (); jx++)
15069 {
15070 depset *dep = b->deps[jx];
15071
15072 if (dep->is_binding ())
15073 {
15074 for (unsigned ix = dep->deps.length (); --ix;)
15075 {
15076 depset *bind = dep->deps[ix];
15077 if (bind->get_entity_kind () == depset::EK_USING)
15078 bind = bind->deps[1];
15079
15080 intercluster_seed (sec, index_lwm, bind);
15081 }
15082 /* Also check the namespace itself. */
15083 dep = dep->deps[0];
15084 }
15085
15086 intercluster_seed (sec, index_lwm, dep);
15087 }
15088 }
15089 sec.tree_node (NULL_TREE);
15090 /* We're done importing now. */
15091 sec.set_importing (-1);
15092
15093 /* Write non-definitions. */
15094 for (unsigned ix = 0; ix != size; ix++)
15095 {
15096 depset *b = scc[ix];
15097 tree decl = b->get_entity ();
15098 switch (b->get_entity_kind ())
15099 {
15100 default:
15101 gcc_unreachable ();
15102 break;
15103
15104 case depset::EK_BINDING:
15105 {
15106 gcc_assert (TREE_CODE (decl) == NAMESPACE_DECL);
15107 dump () && dump ("Depset:%u binding %C:%P", ix, TREE_CODE (decl),
15108 decl, b->get_name ());
15109 sec.u (ct_bind);
15110 sec.tree_node (decl);
15111 sec.tree_node (b->get_name ());
15112
15113 /* Write in reverse order, so reading will see the exports
15114 first, thus building the overload chain will be
15115 optimized. */
15116 for (unsigned jx = b->deps.length (); --jx;)
15117 {
15118 depset *dep = b->deps[jx];
15119 tree bound = dep->get_entity ();
15120 unsigned flags = 0;
15121 if (dep->get_entity_kind () == depset::EK_USING)
15122 {
15123 tree ovl = bound;
15124 bound = OVL_FUNCTION (bound);
15125 if (!(TREE_CODE (bound) == CONST_DECL
15126 && UNSCOPED_ENUM_P (TREE_TYPE (bound))
15127 && decl == TYPE_NAME (TREE_TYPE (bound))))
15128 {
15129 /* An unscope enumerator in its enumeration's
15130 scope is not a using. */
15131 flags |= cbf_using;
15132 if (OVL_USING_P (ovl))
15133 flags |= cbf_wrapped;
15134 }
15135 if (OVL_EXPORT_P (ovl))
15136 flags |= cbf_export;
15137 }
15138 else
15139 {
15140 /* An implicit typedef must be at one. */
15141 gcc_assert (!DECL_IMPLICIT_TYPEDEF_P (bound) || jx == 1);
15142 if (dep->is_hidden ())
15143 flags |= cbf_hidden;
15144 else if (DECL_MODULE_EXPORT_P (STRIP_TEMPLATE (bound)))
15145 flags |= cbf_export;
15146 }
15147
15148 gcc_checking_assert (DECL_P (bound));
15149
15150 sec.i (flags);
15151 sec.tree_node (bound);
15152 }
15153
15154 /* Terminate the list. */
15155 sec.i (-1);
15156 }
15157 break;
15158
15159 case depset::EK_USING:
15160 dump () && dump ("Depset:%u %s %C:%N", ix, b->entity_kind_name (),
15161 TREE_CODE (decl), decl);
15162 break;
15163
15164 case depset::EK_SPECIALIZATION:
15165 case depset::EK_PARTIAL:
15166 case depset::EK_DECL:
15167 dump () && dump ("Depset:%u %s entity:%u %C:%N", ix,
15168 b->entity_kind_name (), b->cluster,
15169 TREE_CODE (decl), decl);
15170
15171 sec.u (ct_decl);
15172 sec.tree_node (decl);
15173
15174 dump () && dump ("Wrote declaration entity:%u %C:%N",
15175 b->cluster, TREE_CODE (decl), decl);
15176 break;
15177 }
15178 }
15179
15180 depset *namer = NULL;
15181
15182 /* Write out definitions */
15183 for (unsigned ix = 0; ix != size; ix++)
15184 {
15185 depset *b = scc[ix];
15186 tree decl = b->get_entity ();
15187 switch (b->get_entity_kind ())
15188 {
15189 default:
15190 break;
15191
15192 case depset::EK_SPECIALIZATION:
15193 case depset::EK_PARTIAL:
15194 case depset::EK_DECL:
15195 if (!namer)
15196 namer = b;
15197
15198 if (b->has_defn ())
15199 {
15200 sec.u (ct_defn);
15201 sec.tree_node (decl);
15202 dump () && dump ("Writing definition %N", decl);
15203 sec.write_definition (decl);
15204
15205 if (!namer->has_defn ())
15206 namer = b;
15207 }
15208 break;
15209 }
15210 }
15211
15212 /* We don't find the section by name. Use depset's decl's name for
15213 human friendliness. */
15214 unsigned name = 0;
15215 tree naming_decl = NULL_TREE;
15216 if (namer)
15217 {
15218 naming_decl = namer->get_entity ();
15219 if (namer->get_entity_kind () == depset::EK_USING)
15220 /* This unfortunately names the section from the target of the
15221 using decl. But the name is only a guide, so Do Not Care. */
15222 naming_decl = OVL_FUNCTION (naming_decl);
15223 if (DECL_IMPLICIT_TYPEDEF_P (naming_decl))
15224 /* Lose any anonymousness. */
15225 naming_decl = TYPE_NAME (TREE_TYPE (naming_decl));
15226 name = to->qualified_name (naming_decl, namer->has_defn ());
15227 }
15228
15229 unsigned bytes = sec.pos;
15230 unsigned snum = sec.end (to, name, crc_ptr);
15231
15232 for (unsigned ix = size; ix--;)
15233 gcc_checking_assert (scc[ix]->section == snum);
15234
15235 dump.outdent ();
15236 dump () && dump ("Wrote section:%u named-by:%N", table.section, naming_decl);
15237
15238 return bytes;
15239 }
15240
15241 /* Read a cluster from section SNUM. */
15242
15243 bool
15244 module_state::read_cluster (unsigned snum)
15245 {
15246 trees_in sec (this);
15247
15248 if (!sec.begin (loc, from (), snum))
15249 return false;
15250
15251 dump () && dump ("Reading section:%u", snum);
15252 dump.indent ();
15253
15254 /* We care about structural equality. */
15255 comparing_dependent_aliases++;
15256
15257 /* First seed the imports. */
15258 while (tree import = sec.tree_node ())
15259 dump (dumper::CLUSTER) && dump ("Seeded import %N", import);
15260
15261 while (!sec.get_overrun () && sec.more_p ())
15262 {
15263 unsigned ct = sec.u ();
15264 switch (ct)
15265 {
15266 default:
15267 sec.set_overrun ();
15268 break;
15269
15270 case ct_bind:
15271 /* A set of namespace bindings. */
15272 {
15273 tree ns = sec.tree_node ();
15274 tree name = sec.tree_node ();
15275 tree decls = NULL_TREE;
15276 tree visible = NULL_TREE;
15277 tree type = NULL_TREE;
15278 bool dedup = false;
15279
15280 /* We rely on the bindings being in the reverse order of
15281 the resulting overload set. */
15282 for (;;)
15283 {
15284 int flags = sec.i ();
15285 if (flags < 0)
15286 break;
15287
15288 if ((flags & cbf_hidden)
15289 && (flags & (cbf_using | cbf_export)))
15290 sec.set_overrun ();
15291
15292 tree decl = sec.tree_node ();
15293 if (sec.get_overrun ())
15294 break;
15295
15296 if (decls && TREE_CODE (decl) == TYPE_DECL)
15297 {
15298 /* Stat hack. */
15299 if (type || !DECL_IMPLICIT_TYPEDEF_P (decl))
15300 sec.set_overrun ();
15301 type = decl;
15302 }
15303 else
15304 {
15305 if (decls
15306 || (flags & (cbf_hidden | cbf_wrapped))
15307 || DECL_FUNCTION_TEMPLATE_P (decl))
15308 {
15309 decls = ovl_make (decl, decls);
15310 if (flags & cbf_using)
15311 {
15312 dedup = true;
15313 OVL_USING_P (decls) = true;
15314 if (flags & cbf_export)
15315 OVL_EXPORT_P (decls) = true;
15316 }
15317
15318 if (flags & cbf_hidden)
15319 OVL_HIDDEN_P (decls) = true;
15320 else if (dedup)
15321 OVL_DEDUP_P (decls) = true;
15322 }
15323 else
15324 decls = decl;
15325
15326 if (flags & cbf_export
15327 || (!(flags & cbf_hidden)
15328 && (is_module () || is_partition ())))
15329 visible = decls;
15330 }
15331 }
15332
15333 if (!decls)
15334 sec.set_overrun ();
15335
15336 if (sec.get_overrun ())
15337 break; /* Bail. */
15338
15339 dump () && dump ("Binding of %P", ns, name);
15340 if (!set_module_binding (ns, name, mod,
15341 is_header () ? -1
15342 : is_module () || is_partition () ? 1
15343 : 0,
15344 decls, type, visible))
15345 sec.set_overrun ();
15346 }
15347 break;
15348
15349 case ct_decl:
15350 /* A decl. */
15351 {
15352 tree decl = sec.tree_node ();
15353 dump () && dump ("Read declaration of %N", decl);
15354 }
15355 break;
15356
15357 case ct_defn:
15358 {
15359 tree decl = sec.tree_node ();
15360 dump () && dump ("Reading definition of %N", decl);
15361 sec.read_definition (decl);
15362 }
15363 break;
15364 }
15365 }
15366
15367 /* When lazy loading is in effect, we can be in the middle of
15368 parsing or instantiating a function. Save it away.
15369 push_function_context does too much work. */
15370 tree old_cfd = current_function_decl;
15371 struct function *old_cfun = cfun;
15372 for (const post_process_data& pdata : sec.post_process ())
15373 {
15374 tree decl = pdata.decl;
15375
15376 bool abstract = false;
15377 if (TREE_CODE (decl) == TEMPLATE_DECL)
15378 {
15379 abstract = true;
15380 decl = DECL_TEMPLATE_RESULT (decl);
15381 }
15382
15383 current_function_decl = decl;
15384 allocate_struct_function (decl, abstract);
15385 cfun->language = ggc_cleared_alloc<language_function> ();
15386 cfun->language->base.x_stmt_tree.stmts_are_full_exprs_p = 1;
15387 cfun->function_start_locus = pdata.start_locus;
15388 cfun->function_end_locus = pdata.end_locus;
15389
15390 if (abstract)
15391 ;
15392 else if (DECL_ABSTRACT_P (decl))
15393 vec_safe_push (post_load_decls, decl);
15394 else
15395 {
15396 bool aggr = aggregate_value_p (DECL_RESULT (decl), decl);
15397 #ifdef PCC_STATIC_STRUCT_RETURN
15398 cfun->returns_pcc_struct = aggr;
15399 #endif
15400 cfun->returns_struct = aggr;
15401
15402 if (DECL_COMDAT (decl))
15403 // FIXME: Comdat grouping?
15404 comdat_linkage (decl);
15405 note_vague_linkage_fn (decl);
15406 cgraph_node::finalize_function (decl, true);
15407 }
15408
15409 }
15410 /* Look, function.cc's interface to cfun does too much for us, we
15411 just need to restore the old value. I do not want to go
15412 redesigning that API right now. */
15413 #undef cfun
15414 cfun = old_cfun;
15415 current_function_decl = old_cfd;
15416 comparing_dependent_aliases--;
15417
15418 dump.outdent ();
15419 dump () && dump ("Read section:%u", snum);
15420
15421 loaded_clusters++;
15422
15423 if (!sec.end (from ()))
15424 return false;
15425
15426 return true;
15427 }
15428
15429 void
15430 module_state::write_namespace (bytes_out &sec, depset *dep)
15431 {
15432 unsigned ns_num = dep->cluster;
15433 unsigned ns_import = 0;
15434
15435 if (dep->is_import ())
15436 ns_import = dep->section;
15437 else if (dep->get_entity () != global_namespace)
15438 ns_num++;
15439
15440 sec.u (ns_import);
15441 sec.u (ns_num);
15442 }
15443
15444 tree
15445 module_state::read_namespace (bytes_in &sec)
15446 {
15447 unsigned ns_import = sec.u ();
15448 unsigned ns_num = sec.u ();
15449 tree ns = NULL_TREE;
15450
15451 if (ns_import || ns_num)
15452 {
15453 if (!ns_import)
15454 ns_num--;
15455
15456 if (unsigned origin = slurp->remap_module (ns_import))
15457 {
15458 module_state *from = (*modules)[origin];
15459 if (ns_num < from->entity_num)
15460 {
15461 binding_slot &slot = (*entity_ary)[from->entity_lwm + ns_num];
15462
15463 if (!slot.is_lazy ())
15464 ns = slot;
15465 }
15466 }
15467 else
15468 sec.set_overrun ();
15469 }
15470 else
15471 ns = global_namespace;
15472
15473 return ns;
15474 }
15475
15476 /* SPACES is a sorted vector of namespaces. Write out the namespaces
15477 to MOD_SNAME_PFX.nms section. */
15478
15479 void
15480 module_state::write_namespaces (elf_out *to, vec<depset *> spaces,
15481 unsigned num, unsigned *crc_p)
15482 {
15483 dump () && dump ("Writing namespaces");
15484 dump.indent ();
15485
15486 bytes_out sec (to);
15487 sec.begin ();
15488
15489 for (unsigned ix = 0; ix != num; ix++)
15490 {
15491 depset *b = spaces[ix];
15492 tree ns = b->get_entity ();
15493
15494 gcc_checking_assert (TREE_CODE (ns) == NAMESPACE_DECL);
15495 /* P1815 may have something to say about this. */
15496 gcc_checking_assert (TREE_PUBLIC (ns));
15497
15498 unsigned flags = 0;
15499 if (TREE_PUBLIC (ns))
15500 flags |= 1;
15501 if (DECL_NAMESPACE_INLINE_P (ns))
15502 flags |= 2;
15503 if (DECL_MODULE_PURVIEW_P (ns))
15504 flags |= 4;
15505 if (DECL_MODULE_EXPORT_P (ns))
15506 flags |= 8;
15507
15508 dump () && dump ("Writing namespace:%u %N%s%s%s%s",
15509 b->cluster, ns,
15510 flags & 1 ? ", public" : "",
15511 flags & 2 ? ", inline" : "",
15512 flags & 4 ? ", purview" : "",
15513 flags & 8 ? ", export" : "");
15514 sec.u (b->cluster);
15515 sec.u (to->name (DECL_NAME (ns)));
15516 write_namespace (sec, b->deps[0]);
15517
15518 sec.u (flags);
15519 write_location (sec, DECL_SOURCE_LOCATION (ns));
15520
15521 if (DECL_NAMESPACE_INLINE_P (ns))
15522 {
15523 if (tree attr = lookup_attribute ("abi_tag", DECL_ATTRIBUTES (ns)))
15524 {
15525 tree tags = TREE_VALUE (attr);
15526 sec.u (list_length (tags));
15527 for (tree tag = tags; tag; tag = TREE_CHAIN (tag))
15528 sec.str (TREE_STRING_POINTER (TREE_VALUE (tag)));
15529 }
15530 else
15531 sec.u (0);
15532 }
15533 }
15534
15535 sec.end (to, to->name (MOD_SNAME_PFX ".nms"), crc_p);
15536 dump.outdent ();
15537 }
15538
15539 /* Read the namespace hierarchy from MOD_SNAME_PFX.namespace. Fill in
15540 SPACES from that data. */
15541
15542 bool
15543 module_state::read_namespaces (unsigned num)
15544 {
15545 bytes_in sec;
15546
15547 if (!sec.begin (loc, from (), MOD_SNAME_PFX ".nms"))
15548 return false;
15549
15550 dump () && dump ("Reading namespaces");
15551 dump.indent ();
15552
15553 for (unsigned ix = 0; ix != num; ix++)
15554 {
15555 unsigned entity_index = sec.u ();
15556 unsigned name = sec.u ();
15557
15558 tree parent = read_namespace (sec);
15559
15560 /* See comment in write_namespace about why not bits. */
15561 unsigned flags = sec.u ();
15562 location_t src_loc = read_location (sec);
15563 unsigned tags_count = (flags & 2) ? sec.u () : 0;
15564
15565 if (entity_index >= entity_num
15566 || !parent
15567 || (flags & 0xc) == 0x8)
15568 sec.set_overrun ();
15569
15570 tree tags = NULL_TREE;
15571 while (tags_count--)
15572 {
15573 size_t len;
15574 const char *str = sec.str (&len);
15575 tags = tree_cons (NULL_TREE, build_string (len + 1, str), tags);
15576 tags = nreverse (tags);
15577 }
15578
15579 if (sec.get_overrun ())
15580 break;
15581
15582 tree id = name ? get_identifier (from ()->name (name)) : NULL_TREE;
15583
15584 dump () && dump ("Read namespace:%u %P%s%s%s%s",
15585 entity_index, parent, id,
15586 flags & 1 ? ", public" : "",
15587 flags & 2 ? ", inline" : "",
15588 flags & 4 ? ", purview" : "",
15589 flags & 8 ? ", export" : "");
15590 bool visible_p = ((flags & 8)
15591 || ((flags & 1)
15592 && (flags & 4)
15593 && (is_partition () || is_module ())));
15594 tree inner = add_imported_namespace (parent, id, src_loc, mod,
15595 bool (flags & 2), visible_p);
15596 if (!inner)
15597 {
15598 sec.set_overrun ();
15599 break;
15600 }
15601
15602 if (is_partition ())
15603 {
15604 if (flags & 4)
15605 DECL_MODULE_PURVIEW_P (inner) = true;
15606 if (flags & 8)
15607 DECL_MODULE_EXPORT_P (inner) = true;
15608 }
15609
15610 if (tags)
15611 DECL_ATTRIBUTES (inner)
15612 = tree_cons (get_identifier ("abi_tag"), tags, DECL_ATTRIBUTES (inner));
15613
15614 /* Install the namespace. */
15615 (*entity_ary)[entity_lwm + entity_index] = inner;
15616 if (DECL_MODULE_IMPORT_P (inner))
15617 {
15618 bool existed;
15619 unsigned *slot = &entity_map->get_or_insert
15620 (DECL_UID (inner), &existed);
15621 if (existed)
15622 /* If it existed, it should match. */
15623 gcc_checking_assert (inner == (*entity_ary)[*slot]);
15624 else
15625 *slot = entity_lwm + entity_index;
15626 }
15627 }
15628 dump.outdent ();
15629 if (!sec.end (from ()))
15630 return false;
15631 return true;
15632 }
15633
15634 /* Write the binding TABLE to MOD_SNAME_PFX.bnd */
15635
15636 unsigned
15637 module_state::write_bindings (elf_out *to, vec<depset *> sccs, unsigned *crc_p)
15638 {
15639 dump () && dump ("Writing binding table");
15640 dump.indent ();
15641
15642 unsigned num = 0;
15643 bytes_out sec (to);
15644 sec.begin ();
15645
15646 for (unsigned ix = 0; ix != sccs.length (); ix++)
15647 {
15648 depset *b = sccs[ix];
15649 if (b->is_binding ())
15650 {
15651 tree ns = b->get_entity ();
15652 dump () && dump ("Bindings %P section:%u", ns, b->get_name (),
15653 b->section);
15654 sec.u (to->name (b->get_name ()));
15655 write_namespace (sec, b->deps[0]);
15656 sec.u (b->section);
15657 num++;
15658 }
15659 }
15660
15661 sec.end (to, to->name (MOD_SNAME_PFX ".bnd"), crc_p);
15662 dump.outdent ();
15663
15664 return num;
15665 }
15666
15667 /* Read the binding table from MOD_SNAME_PFX.bind. */
15668
15669 bool
15670 module_state::read_bindings (unsigned num, unsigned lwm, unsigned hwm)
15671 {
15672 bytes_in sec;
15673
15674 if (!sec.begin (loc, from (), MOD_SNAME_PFX ".bnd"))
15675 return false;
15676
15677 dump () && dump ("Reading binding table");
15678 dump.indent ();
15679 for (; !sec.get_overrun () && num--;)
15680 {
15681 const char *name = from ()->name (sec.u ());
15682 tree ns = read_namespace (sec);
15683 unsigned snum = sec.u ();
15684
15685 if (!ns || !name || (snum - lwm) >= (hwm - lwm))
15686 sec.set_overrun ();
15687 if (!sec.get_overrun ())
15688 {
15689 tree id = get_identifier (name);
15690 dump () && dump ("Bindings %P section:%u", ns, id, snum);
15691 if (mod && !import_module_binding (ns, id, mod, snum))
15692 break;
15693 }
15694 }
15695
15696 dump.outdent ();
15697 if (!sec.end (from ()))
15698 return false;
15699 return true;
15700 }
15701
15702 /* Write the entity table to MOD_SNAME_PFX.ent
15703
15704 Each entry is a section number. */
15705
15706 void
15707 module_state::write_entities (elf_out *to, vec<depset *> depsets,
15708 unsigned count, unsigned *crc_p)
15709 {
15710 dump () && dump ("Writing entities");
15711 dump.indent ();
15712
15713 bytes_out sec (to);
15714 sec.begin ();
15715
15716 unsigned current = 0;
15717 for (unsigned ix = 0; ix < depsets.length (); ix++)
15718 {
15719 depset *d = depsets[ix];
15720
15721 switch (d->get_entity_kind ())
15722 {
15723 default:
15724 break;
15725
15726 case depset::EK_NAMESPACE:
15727 if (!d->is_import () && d->get_entity () != global_namespace)
15728 {
15729 gcc_checking_assert (d->cluster == current);
15730 current++;
15731 sec.u (0);
15732 }
15733 break;
15734
15735 case depset::EK_DECL:
15736 case depset::EK_SPECIALIZATION:
15737 case depset::EK_PARTIAL:
15738 gcc_checking_assert (!d->is_unreached ()
15739 && !d->is_import ()
15740 && d->cluster == current
15741 && d->section);
15742 current++;
15743 sec.u (d->section);
15744 break;
15745 }
15746 }
15747 gcc_assert (count == current);
15748 sec.end (to, to->name (MOD_SNAME_PFX ".ent"), crc_p);
15749 dump.outdent ();
15750 }
15751
15752 bool
15753 module_state::read_entities (unsigned count, unsigned lwm, unsigned hwm)
15754 {
15755 trees_in sec (this);
15756
15757 if (!sec.begin (loc, from (), MOD_SNAME_PFX ".ent"))
15758 return false;
15759
15760 dump () && dump ("Reading entities");
15761 dump.indent ();
15762
15763 for (binding_slot *slot = entity_ary->begin () + entity_lwm; count--; slot++)
15764 {
15765 unsigned snum = sec.u ();
15766 if (snum && (snum - lwm) >= (hwm - lwm))
15767 sec.set_overrun ();
15768 if (sec.get_overrun ())
15769 break;
15770
15771 if (snum)
15772 slot->set_lazy (snum << 2);
15773 }
15774
15775 dump.outdent ();
15776 if (!sec.end (from ()))
15777 return false;
15778 return true;
15779 }
15780
15781 /* Write the pending table to MOD_SNAME_PFX.pnd
15782
15783 The pending table holds information about clusters that need to be
15784 loaded because they contain information about something that is not
15785 found by namespace-scope lookup.
15786
15787 The three cases are:
15788
15789 (a) Template (maybe-partial) specializations that we have
15790 instantiated or defined. When an importer needs to instantiate
15791 that template, they /must have/ the partial, explicit & extern
15792 specializations available. If they have the other specializations
15793 available, they'll have less work to do. Thus, when we're about to
15794 instantiate FOO, we have to be able to ask 'are there any
15795 specialization of FOO in our imports?'.
15796
15797 (b) (Maybe-implicit) member functions definitions. A class could
15798 be defined in one header, and an inline member defined in a
15799 different header (this occurs in the STL). Similarly, like the
15800 specialization case, an implicit member function could have been
15801 'instantiated' in one module, and it'd be nice to not have to
15802 reinstantiate it in another.
15803
15804 (c) A member classes completed elsewhere. A member class could be
15805 declared in one header and defined in another. We need to know to
15806 load the class definition before looking in it. This turns out to
15807 be a specific case of #b, so we can treat these the same. But it
15808 does highlight an issue -- there could be an intermediate import
15809 between the outermost containing namespace-scope class and the
15810 innermost being-defined member class. This is actually possible
15811 with all of these cases, so be aware -- we're not just talking of
15812 one level of import to get to the innermost namespace.
15813
15814 This gets complicated fast, it took me multiple attempts to even
15815 get something remotely working. Partially because I focussed on
15816 optimizing what I think turns out to be a smaller problem, given
15817 the known need to do the more general case *anyway*. I document
15818 the smaller problem, because it does appear to be the natural way
15819 to do it. It's trap!
15820
15821 **** THE TRAP
15822
15823 Let's refer to the primary template or the containing class as the
15824 KEY. And the specialization or member as the PENDING-ENTITY. (To
15825 avoid having to say those mouthfuls all the time.)
15826
15827 In either case, we have an entity and we need some way of mapping
15828 that to a set of entities that need to be loaded before we can
15829 proceed with whatever processing of the entity we were going to do.
15830
15831 We need to link the key to the pending-entity in some way. Given a
15832 key, tell me the pending-entities I need to have loaded. However
15833 we tie the key to the pending-entity must not rely on the key being
15834 loaded -- that'd defeat the lazy loading scheme.
15835
15836 As the key will be an import in we know its entity number (either
15837 because we imported it, or we're writing it out too). Thus we can
15838 generate a map of key-indices to pending-entities. The
15839 pending-entity indices will be into our span of the entity table,
15840 and thus allow them to be lazily loaded. The key index will be
15841 into another slot of the entity table. Notice that this checking
15842 could be expensive, we don't want to iterate over a bunch of
15843 pending-entity indices (across multiple imports), every time we're
15844 about do to the thing with the key. We need to quickly determine
15845 'definitely nothing needed'.
15846
15847 That's almost good enough, except that key indices are not unique
15848 in a couple of cases :( Specifically the Global Module or a module
15849 partition can result in multiple modules assigning an entity index
15850 for the key. The decl-merging on loading will detect that so we
15851 only have one Key loaded, and in the entity hash it'll indicate the
15852 entity index of first load. Which might be different to how we
15853 know it. Notice this is restricted to GM entities or this-module
15854 entities. Foreign imports cannot have this.
15855
15856 We can simply resolve this in the direction of how this module
15857 referred to the key to how the importer knows it. Look in the
15858 entity table slot that we nominate, maybe lazy load it, and then
15859 lookup the resultant entity in the entity hash to learn how the
15860 importer knows it.
15861
15862 But we need to go in the other direction :( Given the key, find all
15863 the index-aliases of that key. We can partially solve that by
15864 adding an alias hash table. Whenever we load a merged decl, add or
15865 augment a mapping from the entity (or its entity-index) to the
15866 newly-discovered index. Then when we look for pending entities of
15867 a key, we also iterate over this aliases this mapping provides.
15868
15869 But that requires the alias to be loaded. And that's not
15870 necessarily true.
15871
15872 *** THE SIMPLER WAY
15873
15874 The remaining fixed thing we have is the innermost namespace
15875 containing the ultimate namespace-scope container of the key and
15876 the name of that container (which might be the key itself). I.e. a
15877 namespace-decl/identifier/module tuple. Let's call this the
15878 top-key. We'll discover that the module is not important here,
15879 because of cross-module possibilities mentioned in case #c above.
15880 We can't markup namespace-binding slots. The best we can do is
15881 mark the binding vector with 'there's something here', and have
15882 another map from namespace/identifier pairs to a vector of pending
15883 entity indices.
15884
15885 Maintain a pending-entity map. This is keyed by top-key, and
15886 maps to a vector of pending-entity indices. On the binding vector
15887 have flags saying whether the pending-name-entity map has contents.
15888 (We might want to further extend the key to be GM-vs-Partition and
15889 specialization-vs-member, but let's not get ahead of ourselves.)
15890
15891 For every key-like entity, find the outermost namespace-scope
15892 name. Use that to lookup in the pending-entity map and then make
15893 sure the specified entities are loaded.
15894
15895 An optimization might be to have a flag in each key-entity saying
15896 that its top key might be in the entity table. It's not clear to
15897 me how to set that flag cheaply -- cheaper than just looking.
15898
15899 FIXME: It'd be nice to have a bit in decls to tell us whether to
15900 even try this. We can have a 'already done' flag, that we set when
15901 we've done KLASS's lazy pendings. When we import a module that
15902 registers pendings on the same top-key as KLASS we need to clear
15903 the flag. A recursive walk of the top-key clearing the bit will
15904 suffice. Plus we only need to recurse on classes that have the bit
15905 set. (That means we need to set the bit on parents of KLASS here,
15906 don't forget.) However, first: correctness, second: efficiency. */
15907
15908 unsigned
15909 module_state::write_pendings (elf_out *to, vec<depset *> depsets,
15910 depset::hash &table, unsigned *crc_p)
15911 {
15912 dump () && dump ("Writing pending-entities");
15913 dump.indent ();
15914
15915 trees_out sec (to, this, table);
15916 sec.begin ();
15917
15918 unsigned count = 0;
15919 tree cache_ns = NULL_TREE;
15920 tree cache_id = NULL_TREE;
15921 unsigned cache_section = ~0;
15922 for (unsigned ix = 0; ix < depsets.length (); ix++)
15923 {
15924 depset *d = depsets[ix];
15925
15926 if (d->is_binding ())
15927 continue;
15928
15929 if (d->is_import ())
15930 continue;
15931
15932 if (!(d->get_entity_kind () == depset::EK_SPECIALIZATION
15933 || d->get_entity_kind () == depset::EK_PARTIAL
15934 || (d->get_entity_kind () == depset::EK_DECL && d->is_member ())))
15935 continue;
15936
15937 tree key_decl = nullptr;
15938 tree key_ns = find_pending_key (d->get_entity (), &key_decl);
15939 tree key_name = DECL_NAME (key_decl);
15940
15941 if (IDENTIFIER_ANON_P (key_name))
15942 {
15943 gcc_checking_assert (IDENTIFIER_LAMBDA_P (key_name));
15944 if (tree attached = LAMBDA_TYPE_EXTRA_SCOPE (TREE_TYPE (key_decl)))
15945 key_name = DECL_NAME (attached);
15946 else
15947 {
15948 /* There's nothing to attach it to. Must
15949 always reinstantiate. */
15950 dump ()
15951 && dump ("Unattached lambda %N[%u] section:%u",
15952 d->get_entity_kind () == depset::EK_DECL
15953 ? "Member" : "Specialization", d->get_entity (),
15954 d->cluster, d->section);
15955 continue;
15956 }
15957 }
15958
15959 char const *also = "";
15960 if (d->section == cache_section
15961 && key_ns == cache_ns
15962 && key_name == cache_id)
15963 /* Same section & key as previous, no need to repeat ourselves. */
15964 also = "also ";
15965 else
15966 {
15967 cache_ns = key_ns;
15968 cache_id = key_name;
15969 cache_section = d->section;
15970 gcc_checking_assert (table.find_dependency (cache_ns));
15971 sec.tree_node (cache_ns);
15972 sec.tree_node (cache_id);
15973 sec.u (d->cluster);
15974 count++;
15975 }
15976 dump () && dump ("Pending %s %N entity:%u section:%u %skeyed to %P",
15977 d->get_entity_kind () == depset::EK_DECL
15978 ? "member" : "specialization", d->get_entity (),
15979 d->cluster, cache_section, also, cache_ns, cache_id);
15980 }
15981 sec.end (to, to->name (MOD_SNAME_PFX ".pnd"), crc_p);
15982 dump.outdent ();
15983
15984 return count;
15985 }
15986
15987 bool
15988 module_state::read_pendings (unsigned count)
15989 {
15990 trees_in sec (this);
15991
15992 if (!sec.begin (loc, from (), MOD_SNAME_PFX ".pnd"))
15993 return false;
15994
15995 dump () && dump ("Reading %u pendings", count);
15996 dump.indent ();
15997
15998 for (unsigned ix = 0; ix != count; ix++)
15999 {
16000 pending_key key;
16001 unsigned index;
16002
16003 key.ns = sec.tree_node ();
16004 key.id = sec.tree_node ();
16005 index = sec.u ();
16006
16007 if (!key.ns || !key.id
16008 || !(TREE_CODE (key.ns) == NAMESPACE_DECL
16009 && !DECL_NAMESPACE_ALIAS (key.ns))
16010 || !identifier_p (key.id)
16011 || index >= entity_num)
16012 sec.set_overrun ();
16013
16014 if (sec.get_overrun ())
16015 break;
16016
16017 dump () && dump ("Pending:%u keyed to %P", index, key.ns, key.id);
16018
16019 index += entity_lwm;
16020 auto &vec = pending_table->get_or_insert (key);
16021 vec.safe_push (index);
16022 }
16023
16024 dump.outdent ();
16025 if (!sec.end (from ()))
16026 return false;
16027 return true;
16028 }
16029
16030 /* Read & write locations. */
16031 enum loc_kind {
16032 LK_ORDINARY,
16033 LK_MACRO,
16034 LK_IMPORT_ORDINARY,
16035 LK_IMPORT_MACRO,
16036 LK_ADHOC,
16037 LK_RESERVED,
16038 };
16039
16040 static const module_state *
16041 module_for_ordinary_loc (location_t loc)
16042 {
16043 unsigned pos = 0;
16044 unsigned len = ool->length () - pos;
16045
16046 while (len)
16047 {
16048 unsigned half = len / 2;
16049 module_state *probe = (*ool)[pos + half];
16050 if (loc < probe->ordinary_locs.first)
16051 len = half;
16052 else if (loc < probe->ordinary_locs.first + probe->ordinary_locs.second)
16053 return probe;
16054 else
16055 {
16056 pos += half + 1;
16057 len = len - (half + 1);
16058 }
16059 }
16060
16061 return nullptr;
16062 }
16063
16064 static const module_state *
16065 module_for_macro_loc (location_t loc)
16066 {
16067 unsigned pos = 1;
16068 unsigned len = modules->length () - pos;
16069
16070 while (len)
16071 {
16072 unsigned half = len / 2;
16073 module_state *probe = (*modules)[pos + half];
16074 if (loc < probe->macro_locs.first)
16075 {
16076 pos += half + 1;
16077 len = len - (half + 1);
16078 }
16079 else if (loc >= probe->macro_locs.first + probe->macro_locs.second)
16080 len = half;
16081 else
16082 return probe;
16083 }
16084
16085 return NULL;
16086 }
16087
16088 location_t
16089 module_state::imported_from () const
16090 {
16091 location_t from = loc;
16092 line_map_ordinary const *fmap
16093 = linemap_check_ordinary (linemap_lookup (line_table, from));
16094
16095 if (MAP_MODULE_P (fmap))
16096 from = linemap_included_from (fmap);
16097
16098 return from;
16099 }
16100
16101 /* Note that LOC will need writing. This allows us to prune locations
16102 that are not needed. */
16103
16104 bool
16105 module_state::note_location (location_t loc)
16106 {
16107 bool added = false;
16108 if (!macro_loc_table && !ord_loc_table)
16109 ;
16110 else if (loc < RESERVED_LOCATION_COUNT)
16111 ;
16112 else if (IS_ADHOC_LOC (loc))
16113 {
16114 location_t locus = get_location_from_adhoc_loc (line_table, loc);
16115 note_location (locus);
16116 source_range range = get_range_from_loc (line_table, loc);
16117 if (range.m_start != locus)
16118 note_location (range.m_start);
16119 note_location (range.m_finish);
16120 }
16121 else if (loc >= LINEMAPS_MACRO_LOWEST_LOCATION (line_table))
16122 {
16123 if (spans.macro (loc))
16124 {
16125 const line_map *map = linemap_lookup (line_table, loc);
16126 const line_map_macro *mac_map = linemap_check_macro (map);
16127 hashval_t hv = macro_loc_traits::hash (mac_map);
16128 macro_loc_info *slot
16129 = macro_loc_table->find_slot_with_hash (mac_map, hv, INSERT);
16130 if (!slot->src)
16131 {
16132 slot->src = mac_map;
16133 slot->remap = 0;
16134 // Expansion locations could themselves be from a
16135 // macro, we need to note them all.
16136 note_location (mac_map->m_expansion);
16137 gcc_checking_assert (mac_map->n_tokens);
16138 location_t tloc = UNKNOWN_LOCATION;
16139 for (unsigned ix = mac_map->n_tokens * 2; ix--;)
16140 if (mac_map->macro_locations[ix] != tloc)
16141 {
16142 tloc = mac_map->macro_locations[ix];
16143 note_location (tloc);
16144 }
16145 added = true;
16146 }
16147 }
16148 }
16149 else if (IS_ORDINARY_LOC (loc))
16150 {
16151 if (spans.ordinary (loc))
16152 {
16153 const line_map *map = linemap_lookup (line_table, loc);
16154 const line_map_ordinary *ord_map = linemap_check_ordinary (map);
16155 ord_loc_info lkup;
16156 lkup.src = ord_map;
16157 lkup.span = 1 << ord_map->m_column_and_range_bits;
16158 lkup.offset = (loc - MAP_START_LOCATION (ord_map)) & ~(lkup.span - 1);
16159 lkup.remap = 0;
16160 ord_loc_info *slot = (ord_loc_table->find_slot_with_hash
16161 (lkup, ord_loc_traits::hash (lkup), INSERT));
16162 if (!slot->src)
16163 {
16164 *slot = lkup;
16165 added = true;
16166 }
16167 }
16168 }
16169 else
16170 gcc_unreachable ();
16171 return added;
16172 }
16173
16174 /* If we're not streaming, record that we need location LOC.
16175 Otherwise stream it. */
16176
16177 void
16178 module_state::write_location (bytes_out &sec, location_t loc)
16179 {
16180 if (!sec.streaming_p ())
16181 {
16182 note_location (loc);
16183 return;
16184 }
16185
16186 if (loc < RESERVED_LOCATION_COUNT)
16187 {
16188 dump (dumper::LOCATION) && dump ("Reserved location %u", unsigned (loc));
16189 sec.u (LK_RESERVED + loc);
16190 }
16191 else if (IS_ADHOC_LOC (loc))
16192 {
16193 dump (dumper::LOCATION) && dump ("Adhoc location");
16194 sec.u (LK_ADHOC);
16195 location_t locus = get_location_from_adhoc_loc (line_table, loc);
16196 write_location (sec, locus);
16197 source_range range = get_range_from_loc (line_table, loc);
16198 if (range.m_start == locus)
16199 /* Compress. */
16200 range.m_start = UNKNOWN_LOCATION;
16201 write_location (sec, range.m_start);
16202 write_location (sec, range.m_finish);
16203 unsigned discriminator = get_discriminator_from_adhoc_loc (line_table, loc);
16204 sec.u (discriminator);
16205 }
16206 else if (loc >= LINEMAPS_MACRO_LOWEST_LOCATION (line_table))
16207 {
16208 const macro_loc_info *info = nullptr;
16209 unsigned offset = 0;
16210 if (unsigned hwm = macro_loc_remap->length ())
16211 {
16212 info = macro_loc_remap->begin ();
16213 while (hwm != 1)
16214 {
16215 unsigned mid = hwm / 2;
16216 if (MAP_START_LOCATION (info[mid].src) <= loc)
16217 {
16218 info += mid;
16219 hwm -= mid;
16220 }
16221 else
16222 hwm = mid;
16223 }
16224 offset = loc - MAP_START_LOCATION (info->src);
16225 if (offset > info->src->n_tokens)
16226 info = nullptr;
16227 }
16228
16229 gcc_checking_assert (bool (info) == bool (spans.macro (loc)));
16230
16231 if (info)
16232 {
16233 offset += info->remap;
16234 sec.u (LK_MACRO);
16235 sec.u (offset);
16236 dump (dumper::LOCATION)
16237 && dump ("Macro location %u output %u", loc, offset);
16238 }
16239 else if (const module_state *import = module_for_macro_loc (loc))
16240 {
16241 unsigned off = loc - import->macro_locs.first;
16242 sec.u (LK_IMPORT_MACRO);
16243 sec.u (import->remap);
16244 sec.u (off);
16245 dump (dumper::LOCATION)
16246 && dump ("Imported macro location %u output %u:%u",
16247 loc, import->remap, off);
16248 }
16249 else
16250 gcc_unreachable ();
16251 }
16252 else if (IS_ORDINARY_LOC (loc))
16253 {
16254 const ord_loc_info *info = nullptr;
16255 unsigned offset = 0;
16256 if (unsigned hwm = ord_loc_remap->length ())
16257 {
16258 info = ord_loc_remap->begin ();
16259 while (hwm != 1)
16260 {
16261 unsigned mid = hwm / 2;
16262 if (MAP_START_LOCATION (info[mid].src) + info[mid].offset <= loc)
16263 {
16264 info += mid;
16265 hwm -= mid;
16266 }
16267 else
16268 hwm = mid;
16269 }
16270 offset = loc - MAP_START_LOCATION (info->src) - info->offset;
16271 if (offset > info->span)
16272 info = nullptr;
16273 }
16274
16275 gcc_checking_assert (bool (info) == bool (spans.ordinary (loc)));
16276
16277 if (info)
16278 {
16279 offset += info->remap;
16280 sec.u (LK_ORDINARY);
16281 sec.u (offset);
16282
16283 dump (dumper::LOCATION)
16284 && dump ("Ordinary location %u output %u", loc, offset);
16285 }
16286 else if (const module_state *import = module_for_ordinary_loc (loc))
16287 {
16288 unsigned off = loc - import->ordinary_locs.first;
16289 sec.u (LK_IMPORT_ORDINARY);
16290 sec.u (import->remap);
16291 sec.u (off);
16292 dump (dumper::LOCATION)
16293 && dump ("Imported ordinary location %u output %u:%u",
16294 import->remap, import->remap, off);
16295 }
16296 else
16297 gcc_unreachable ();
16298 }
16299 else
16300 gcc_unreachable ();
16301 }
16302
16303 location_t
16304 module_state::read_location (bytes_in &sec) const
16305 {
16306 location_t locus = UNKNOWN_LOCATION;
16307 unsigned kind = sec.u ();
16308 switch (kind)
16309 {
16310 default:
16311 {
16312 if (kind < LK_RESERVED + RESERVED_LOCATION_COUNT)
16313 locus = location_t (kind - LK_RESERVED);
16314 else
16315 sec.set_overrun ();
16316 dump (dumper::LOCATION)
16317 && dump ("Reserved location %u", unsigned (locus));
16318 }
16319 break;
16320
16321 case LK_ADHOC:
16322 {
16323 dump (dumper::LOCATION) && dump ("Adhoc location");
16324 locus = read_location (sec);
16325 source_range range;
16326 range.m_start = read_location (sec);
16327 if (range.m_start == UNKNOWN_LOCATION)
16328 range.m_start = locus;
16329 range.m_finish = read_location (sec);
16330 unsigned discriminator = sec.u ();
16331 if (locus != loc && range.m_start != loc && range.m_finish != loc)
16332 locus = line_table->get_or_create_combined_loc (locus, range,
16333 nullptr, discriminator);
16334 }
16335 break;
16336
16337 case LK_MACRO:
16338 {
16339 unsigned off = sec.u ();
16340
16341 if (macro_locs.second)
16342 {
16343 if (off < macro_locs.second)
16344 locus = off + macro_locs.first;
16345 else
16346 sec.set_overrun ();
16347 }
16348 else
16349 locus = loc;
16350 dump (dumper::LOCATION)
16351 && dump ("Macro %u becoming %u", off, locus);
16352 }
16353 break;
16354
16355 case LK_ORDINARY:
16356 {
16357 unsigned off = sec.u ();
16358 if (ordinary_locs.second)
16359 {
16360 if (off < ordinary_locs.second)
16361 locus = off + ordinary_locs.first;
16362 else
16363 sec.set_overrun ();
16364 }
16365 else
16366 locus = loc;
16367
16368 dump (dumper::LOCATION)
16369 && dump ("Ordinary location %u becoming %u", off, locus);
16370 }
16371 break;
16372
16373 case LK_IMPORT_MACRO:
16374 case LK_IMPORT_ORDINARY:
16375 {
16376 unsigned mod = sec.u ();
16377 unsigned off = sec.u ();
16378 const module_state *import = NULL;
16379
16380 if (!mod && !slurp->remap)
16381 /* This is an early read of a partition location during the
16382 read of our ordinary location map. */
16383 import = this;
16384 else
16385 {
16386 mod = slurp->remap_module (mod);
16387 if (!mod)
16388 sec.set_overrun ();
16389 else
16390 import = (*modules)[mod];
16391 }
16392
16393 if (import)
16394 {
16395 if (kind == LK_IMPORT_MACRO)
16396 {
16397 if (!import->macro_locs.second)
16398 locus = import->loc;
16399 else if (off < import->macro_locs.second)
16400 locus = off + import->macro_locs.first;
16401 else
16402 sec.set_overrun ();
16403 }
16404 else
16405 {
16406 if (!import->ordinary_locs.second)
16407 locus = import->loc;
16408 else if (off < import->ordinary_locs.second)
16409 locus = import->ordinary_locs.first + off;
16410 else
16411 sec.set_overrun ();
16412 }
16413 }
16414 }
16415 break;
16416 }
16417
16418 return locus;
16419 }
16420
16421 /* Allocate hash tables to record needed locations. */
16422
16423 void
16424 module_state::write_init_maps ()
16425 {
16426 macro_loc_table = new hash_table<macro_loc_traits> (EXPERIMENT (1, 400));
16427 ord_loc_table = new hash_table<ord_loc_traits> (EXPERIMENT (1, 400));
16428 }
16429
16430 /* Prepare the span adjustments. We prune unneeded locations -- at
16431 this point every needed location must have been seen by
16432 note_location. */
16433
16434 range_t
16435 module_state::write_prepare_maps (module_state_config *cfg, bool has_partitions)
16436 {
16437 dump () && dump ("Preparing locations");
16438 dump.indent ();
16439
16440 dump () && dump ("Reserved locations [%u,%u) macro [%u,%u)",
16441 spans[loc_spans::SPAN_RESERVED].ordinary.first,
16442 spans[loc_spans::SPAN_RESERVED].ordinary.second,
16443 spans[loc_spans::SPAN_RESERVED].macro.first,
16444 spans[loc_spans::SPAN_RESERVED].macro.second);
16445
16446 range_t info {0, 0};
16447
16448 // Sort the noted lines.
16449 vec_alloc (ord_loc_remap, ord_loc_table->size ());
16450 for (auto iter = ord_loc_table->begin (), end = ord_loc_table->end ();
16451 iter != end; ++iter)
16452 ord_loc_remap->quick_push (*iter);
16453 ord_loc_remap->qsort (&ord_loc_info::compare);
16454
16455 // Note included-from maps.
16456 bool added = false;
16457 const line_map_ordinary *current = nullptr;
16458 for (auto iter = ord_loc_remap->begin (), end = ord_loc_remap->end ();
16459 iter != end; ++iter)
16460 if (iter->src != current)
16461 {
16462 current = iter->src;
16463 for (auto probe = current;
16464 auto from = linemap_included_from (probe);
16465 probe = linemap_check_ordinary (linemap_lookup (line_table, from)))
16466 {
16467 if (has_partitions)
16468 {
16469 // Partition locations need to elide their module map
16470 // entry.
16471 probe
16472 = linemap_check_ordinary (linemap_lookup (line_table, from));
16473 if (MAP_MODULE_P (probe))
16474 from = linemap_included_from (probe);
16475 }
16476
16477 if (!note_location (from))
16478 break;
16479 added = true;
16480 }
16481 }
16482 if (added)
16483 {
16484 // Reconstruct the line array as we added items to the hash table.
16485 vec_free (ord_loc_remap);
16486 vec_alloc (ord_loc_remap, ord_loc_table->size ());
16487 for (auto iter = ord_loc_table->begin (), end = ord_loc_table->end ();
16488 iter != end; ++iter)
16489 ord_loc_remap->quick_push (*iter);
16490 ord_loc_remap->qsort (&ord_loc_info::compare);
16491 }
16492 delete ord_loc_table;
16493 ord_loc_table = nullptr;
16494
16495 // Merge (sufficiently) adjacent spans, and calculate remapping.
16496 constexpr unsigned adjacency = 2; // Allow 2 missing lines.
16497 auto begin = ord_loc_remap->begin (), end = ord_loc_remap->end ();
16498 auto dst = begin;
16499 unsigned offset = 0, range_bits = 0;
16500 ord_loc_info *base = nullptr;
16501 for (auto iter = begin; iter != end; ++iter)
16502 {
16503 if (base && iter->src == base->src)
16504 {
16505 if (base->offset + base->span +
16506 ((adjacency << base->src->m_column_and_range_bits)
16507 // If there are few c&r bits, allow further separation.
16508 | (adjacency << 4))
16509 >= iter->offset)
16510 {
16511 // Merge.
16512 offset -= base->span;
16513 base->span = iter->offset + iter->span - base->offset;
16514 offset += base->span;
16515 continue;
16516 }
16517 }
16518 else if (range_bits < iter->src->m_range_bits)
16519 range_bits = iter->src->m_range_bits;
16520
16521 offset += ((1u << iter->src->m_range_bits) - 1);
16522 offset &= ~((1u << iter->src->m_range_bits) - 1);
16523 iter->remap = offset;
16524 offset += iter->span;
16525 base = dst;
16526 *dst++ = *iter;
16527 }
16528 ord_loc_remap->truncate (dst - begin);
16529
16530 info.first = ord_loc_remap->length ();
16531 cfg->ordinary_locs = offset;
16532 cfg->loc_range_bits = range_bits;
16533 dump () && dump ("Ordinary maps:%u locs:%u range_bits:%u",
16534 info.first, cfg->ordinary_locs,
16535 cfg->loc_range_bits);
16536
16537 // Remap the macro locations.
16538 vec_alloc (macro_loc_remap, macro_loc_table->size ());
16539 for (auto iter = macro_loc_table->begin (), end = macro_loc_table->end ();
16540 iter != end; ++iter)
16541 macro_loc_remap->quick_push (*iter);
16542 delete macro_loc_table;
16543 macro_loc_table = nullptr;
16544
16545 macro_loc_remap->qsort (&macro_loc_info::compare);
16546 offset = 0;
16547 for (auto iter = macro_loc_remap->begin (), end = macro_loc_remap->end ();
16548 iter != end; ++iter)
16549 {
16550 auto mac = iter->src;
16551 iter->remap = offset;
16552 offset += mac->n_tokens;
16553 }
16554 info.second = macro_loc_remap->length ();
16555 cfg->macro_locs = offset;
16556
16557 dump () && dump ("Macro maps:%u locs:%u", info.second, cfg->macro_locs);
16558
16559 dump.outdent ();
16560
16561 // If we have no ordinary locs, we must also have no macro locs.
16562 gcc_checking_assert (cfg->ordinary_locs || !cfg->macro_locs);
16563
16564 return info;
16565 }
16566
16567 bool
16568 module_state::read_prepare_maps (const module_state_config *cfg)
16569 {
16570 location_t ordinary = line_table->highest_location + 1;
16571 ordinary += cfg->ordinary_locs;
16572
16573 location_t macro = LINEMAPS_MACRO_LOWEST_LOCATION (line_table);
16574 macro -= cfg->macro_locs;
16575
16576 if (ordinary < LINE_MAP_MAX_LOCATION_WITH_COLS
16577 && macro >= LINE_MAP_MAX_LOCATION)
16578 /* OK, we have enough locations. */
16579 return true;
16580
16581 ordinary_locs.first = ordinary_locs.second = 0;
16582 macro_locs.first = macro_locs.second = 0;
16583
16584 static bool informed = false;
16585 if (!informed)
16586 {
16587 /* Just give the notice once. */
16588 informed = true;
16589 inform (loc, "unable to represent further imported source locations");
16590 }
16591
16592 return false;
16593 }
16594
16595 /* Write & read the location maps. Not called if there are no
16596 locations. */
16597
16598 void
16599 module_state::write_ordinary_maps (elf_out *to, range_t &info,
16600 bool has_partitions, unsigned *crc_p)
16601 {
16602 dump () && dump ("Writing ordinary location maps");
16603 dump.indent ();
16604
16605 vec<const char *> filenames;
16606 filenames.create (20);
16607
16608 /* Determine the unique filenames. */
16609 const line_map_ordinary *current = nullptr;
16610 for (auto iter = ord_loc_remap->begin (), end = ord_loc_remap->end ();
16611 iter != end; ++iter)
16612 if (iter->src != current)
16613 {
16614 current = iter->src;
16615 const char *fname = ORDINARY_MAP_FILE_NAME (iter->src);
16616
16617 /* We should never find a module linemap in an interval. */
16618 gcc_checking_assert (!MAP_MODULE_P (iter->src));
16619
16620 /* We expect very few filenames, so just an array.
16621 (Not true when headers are still in play :() */
16622 for (unsigned jx = filenames.length (); jx--;)
16623 {
16624 const char *name = filenames[jx];
16625 if (0 == strcmp (name, fname))
16626 {
16627 /* Reset the linemap's name, because for things like
16628 preprocessed input we could have multiple instances
16629 of the same name, and we'd rather not percolate
16630 that. */
16631 const_cast<line_map_ordinary *> (iter->src)->to_file = name;
16632 fname = NULL;
16633 break;
16634 }
16635 }
16636 if (fname)
16637 filenames.safe_push (fname);
16638 }
16639
16640 bytes_out sec (to);
16641 sec.begin ();
16642
16643 /* Write the filenames. */
16644 unsigned len = filenames.length ();
16645 sec.u (len);
16646 dump () && dump ("%u source file names", len);
16647 for (unsigned ix = 0; ix != len; ix++)
16648 {
16649 const char *fname = filenames[ix];
16650 dump (dumper::LOCATION) && dump ("Source file[%u]=%s", ix, fname);
16651 sec.str (fname);
16652 }
16653
16654 sec.u (info.first); /* Num maps. */
16655 const ord_loc_info *base = nullptr;
16656 for (auto iter = ord_loc_remap->begin (), end = ord_loc_remap->end ();
16657 iter != end; ++iter)
16658 {
16659 dump (dumper::LOCATION)
16660 && dump ("Span:%u ordinary [%u+%u,+%u)->[%u,+%u)",
16661 iter - ord_loc_remap->begin (),
16662 MAP_START_LOCATION (iter->src), iter->offset, iter->span,
16663 iter->remap, iter->span);
16664
16665 if (!base || iter->src != base->src)
16666 base = iter;
16667 sec.u (iter->offset - base->offset);
16668 if (base == iter)
16669 {
16670 sec.u (iter->src->sysp);
16671 sec.u (iter->src->m_range_bits);
16672 sec.u (iter->src->m_column_and_range_bits - iter->src->m_range_bits);
16673
16674 const char *fname = ORDINARY_MAP_FILE_NAME (iter->src);
16675 for (unsigned ix = 0; ix != filenames.length (); ix++)
16676 if (filenames[ix] == fname)
16677 {
16678 sec.u (ix);
16679 break;
16680 }
16681 unsigned line = ORDINARY_MAP_STARTING_LINE_NUMBER (iter->src);
16682 line += iter->offset >> iter->src->m_column_and_range_bits;
16683 sec.u (line);
16684 }
16685 sec.u (iter->remap);
16686 if (base == iter)
16687 {
16688 /* Write the included from location, which means reading it
16689 while reading in the ordinary maps. So we'd better not
16690 be getting ahead of ourselves. */
16691 location_t from = linemap_included_from (iter->src);
16692 gcc_checking_assert (from < MAP_START_LOCATION (iter->src));
16693 if (from != UNKNOWN_LOCATION && has_partitions)
16694 {
16695 /* A partition's span will have a from pointing at a
16696 MODULE_INC. Find that map's from. */
16697 line_map_ordinary const *fmap
16698 = linemap_check_ordinary (linemap_lookup (line_table, from));
16699 if (MAP_MODULE_P (fmap))
16700 from = linemap_included_from (fmap);
16701 }
16702 write_location (sec, from);
16703 }
16704 }
16705
16706 filenames.release ();
16707
16708 sec.end (to, to->name (MOD_SNAME_PFX ".olm"), crc_p);
16709 dump.outdent ();
16710 }
16711
16712 void
16713 module_state::write_macro_maps (elf_out *to, range_t &info, unsigned *crc_p)
16714 {
16715 dump () && dump ("Writing macro location maps");
16716 dump.indent ();
16717
16718 bytes_out sec (to);
16719 sec.begin ();
16720
16721 dump () && dump ("Macro maps:%u", info.second);
16722 sec.u (info.second);
16723
16724 unsigned macro_num = 0;
16725 for (auto iter = macro_loc_remap->end (), begin = macro_loc_remap->begin ();
16726 iter-- != begin;)
16727 {
16728 auto mac = iter->src;
16729 sec.u (iter->remap);
16730 sec.u (mac->n_tokens);
16731 sec.cpp_node (mac->macro);
16732 write_location (sec, mac->m_expansion);
16733 const location_t *locs = mac->macro_locations;
16734 /* There are lots of identical runs. */
16735 location_t prev = UNKNOWN_LOCATION;
16736 unsigned count = 0;
16737 unsigned runs = 0;
16738 for (unsigned jx = mac->n_tokens * 2; jx--;)
16739 {
16740 location_t tok_loc = locs[jx];
16741 if (tok_loc == prev)
16742 {
16743 count++;
16744 continue;
16745 }
16746 runs++;
16747 sec.u (count);
16748 count = 1;
16749 prev = tok_loc;
16750 write_location (sec, tok_loc);
16751 }
16752 sec.u (count);
16753 dump (dumper::LOCATION)
16754 && dump ("Macro:%u %I %u/%u*2 locations [%u,%u)->%u",
16755 macro_num, identifier (mac->macro),
16756 runs, mac->n_tokens,
16757 MAP_START_LOCATION (mac),
16758 MAP_START_LOCATION (mac) + mac->n_tokens,
16759 iter->remap);
16760 macro_num++;
16761 }
16762 gcc_assert (macro_num == info.second);
16763
16764 sec.end (to, to->name (MOD_SNAME_PFX ".mlm"), crc_p);
16765 dump.outdent ();
16766 }
16767
16768 bool
16769 module_state::read_ordinary_maps (unsigned num_ord_locs, unsigned range_bits)
16770 {
16771 bytes_in sec;
16772
16773 if (!sec.begin (loc, from (), MOD_SNAME_PFX ".olm"))
16774 return false;
16775 dump () && dump ("Reading ordinary location maps");
16776 dump.indent ();
16777
16778 /* Read the filename table. */
16779 unsigned len = sec.u ();
16780 dump () && dump ("%u source file names", len);
16781 vec<const char *> filenames;
16782 filenames.create (len);
16783 for (unsigned ix = 0; ix != len; ix++)
16784 {
16785 size_t l;
16786 const char *buf = sec.str (&l);
16787 char *fname = XNEWVEC (char, l + 1);
16788 memcpy (fname, buf, l + 1);
16789 dump (dumper::LOCATION) && dump ("Source file[%u]=%s", ix, fname);
16790 /* We leak these names into the line-map table. But it
16791 doesn't own them. */
16792 filenames.quick_push (fname);
16793 }
16794
16795 unsigned num_ordinary = sec.u ();
16796 dump () && dump ("Ordinary maps:%u, range_bits:%u", num_ordinary, range_bits);
16797
16798 location_t offset = line_table->highest_location + 1;
16799 offset += ((1u << range_bits) - 1);
16800 offset &= ~((1u << range_bits) - 1);
16801 ordinary_locs.first = offset;
16802
16803 bool propagated = spans.maybe_propagate (this, offset);
16804 line_map_ordinary *maps = static_cast<line_map_ordinary *>
16805 (line_map_new_raw (line_table, false, num_ordinary));
16806
16807 const line_map_ordinary *base = nullptr;
16808 for (unsigned ix = 0; ix != num_ordinary && !sec.get_overrun (); ix++)
16809 {
16810 line_map_ordinary *map = &maps[ix];
16811
16812 unsigned offset = sec.u ();
16813 if (!offset)
16814 {
16815 map->reason = LC_RENAME;
16816 map->sysp = sec.u ();
16817 map->m_range_bits = sec.u ();
16818 map->m_column_and_range_bits = sec.u () + map->m_range_bits;
16819 unsigned fnum = sec.u ();
16820 map->to_file = (fnum < filenames.length () ? filenames[fnum] : "");
16821 map->to_line = sec.u ();
16822 base = map;
16823 }
16824 else
16825 {
16826 *map = *base;
16827 map->to_line += offset >> map->m_column_and_range_bits;
16828 }
16829 unsigned remap = sec.u ();
16830 map->start_location = remap + ordinary_locs.first;
16831 if (base == map)
16832 {
16833 /* Root the outermost map at our location. */
16834 ordinary_locs.second = remap;
16835 location_t from = read_location (sec);
16836 map->included_from = from != UNKNOWN_LOCATION ? from : loc;
16837 }
16838 }
16839
16840 ordinary_locs.second = num_ord_locs;
16841 /* highest_location is the one handed out, not the next one to
16842 hand out. */
16843 line_table->highest_location = ordinary_locs.first + ordinary_locs.second - 1;
16844
16845 if (line_table->highest_location >= LINE_MAP_MAX_LOCATION_WITH_COLS)
16846 /* We shouldn't run out of locations, as we checked before
16847 starting. */
16848 sec.set_overrun ();
16849 dump () && dump ("Ordinary location [%u,+%u)",
16850 ordinary_locs.first, ordinary_locs.second);
16851
16852 if (propagated)
16853 spans.close ();
16854
16855 filenames.release ();
16856
16857 dump.outdent ();
16858 if (!sec.end (from ()))
16859 return false;
16860
16861 return true;
16862 }
16863
16864 bool
16865 module_state::read_macro_maps (unsigned num_macro_locs)
16866 {
16867 bytes_in sec;
16868
16869 if (!sec.begin (loc, from (), MOD_SNAME_PFX ".mlm"))
16870 return false;
16871 dump () && dump ("Reading macro location maps");
16872 dump.indent ();
16873
16874 unsigned num_macros = sec.u ();
16875 dump () && dump ("Macro maps:%u locs:%u", num_macros, num_macro_locs);
16876
16877 bool propagated = spans.maybe_propagate (this,
16878 line_table->highest_location + 1);
16879
16880 location_t offset = LINEMAPS_MACRO_LOWEST_LOCATION (line_table);
16881 macro_locs.second = num_macro_locs;
16882 macro_locs.first = offset - num_macro_locs;
16883
16884 dump () && dump ("Macro loc delta %d", offset);
16885 dump () && dump ("Macro locations [%u,%u)",
16886 macro_locs.first, macro_locs.second);
16887
16888 for (unsigned ix = 0; ix != num_macros && !sec.get_overrun (); ix++)
16889 {
16890 unsigned offset = sec.u ();
16891 unsigned n_tokens = sec.u ();
16892 cpp_hashnode *node = sec.cpp_node ();
16893 location_t exp_loc = read_location (sec);
16894
16895 const line_map_macro *macro
16896 = linemap_enter_macro (line_table, node, exp_loc, n_tokens);
16897 if (!macro)
16898 /* We shouldn't run out of locations, as we checked that we
16899 had enough before starting. */
16900 break;
16901 gcc_checking_assert (MAP_START_LOCATION (macro)
16902 == offset + macro_locs.first);
16903
16904 location_t *locs = macro->macro_locations;
16905 location_t tok_loc = UNKNOWN_LOCATION;
16906 unsigned count = sec.u ();
16907 unsigned runs = 0;
16908 for (unsigned jx = macro->n_tokens * 2; jx-- && !sec.get_overrun ();)
16909 {
16910 while (!count-- && !sec.get_overrun ())
16911 {
16912 runs++;
16913 tok_loc = read_location (sec);
16914 count = sec.u ();
16915 }
16916 locs[jx] = tok_loc;
16917 }
16918 if (count)
16919 sec.set_overrun ();
16920 dump (dumper::LOCATION)
16921 && dump ("Macro:%u %I %u/%u*2 locations [%u,%u)",
16922 ix, identifier (node), runs, n_tokens,
16923 MAP_START_LOCATION (macro),
16924 MAP_START_LOCATION (macro) + n_tokens);
16925 }
16926
16927 dump () && dump ("Macro location lwm:%u", macro_locs.first);
16928 if (propagated)
16929 spans.close ();
16930
16931 dump.outdent ();
16932 if (!sec.end (from ()))
16933 return false;
16934
16935 return true;
16936 }
16937
16938 /* Serialize the definition of MACRO. */
16939
16940 void
16941 module_state::write_define (bytes_out &sec, const cpp_macro *macro)
16942 {
16943 sec.u (macro->count);
16944
16945 bytes_out::bits_out bits = sec.stream_bits ();
16946 bits.b (macro->fun_like);
16947 bits.b (macro->variadic);
16948 bits.b (macro->syshdr);
16949 bits.bflush ();
16950
16951 write_location (sec, macro->line);
16952 if (macro->fun_like)
16953 {
16954 sec.u (macro->paramc);
16955 const cpp_hashnode *const *parms = macro->parm.params;
16956 for (unsigned ix = 0; ix != macro->paramc; ix++)
16957 sec.cpp_node (parms[ix]);
16958 }
16959
16960 unsigned len = 0;
16961 for (unsigned ix = 0; ix != macro->count; ix++)
16962 {
16963 const cpp_token *token = &macro->exp.tokens[ix];
16964 write_location (sec, token->src_loc);
16965 sec.u (token->type);
16966 sec.u (token->flags);
16967 switch (cpp_token_val_index (token))
16968 {
16969 default:
16970 gcc_unreachable ();
16971
16972 case CPP_TOKEN_FLD_ARG_NO:
16973 /* An argument reference. */
16974 sec.u (token->val.macro_arg.arg_no);
16975 sec.cpp_node (token->val.macro_arg.spelling);
16976 break;
16977
16978 case CPP_TOKEN_FLD_NODE:
16979 /* An identifier. */
16980 sec.cpp_node (token->val.node.node);
16981 if (token->val.node.spelling == token->val.node.node)
16982 /* The spelling will usually be the same. so optimize
16983 that. */
16984 sec.str (NULL, 0);
16985 else
16986 sec.cpp_node (token->val.node.spelling);
16987 break;
16988
16989 case CPP_TOKEN_FLD_NONE:
16990 break;
16991
16992 case CPP_TOKEN_FLD_STR:
16993 /* A string, number or comment. Not always NUL terminated,
16994 we stream out in a single contatenation with embedded
16995 NULs as that's a safe default. */
16996 len += token->val.str.len + 1;
16997 sec.u (token->val.str.len);
16998 break;
16999
17000 case CPP_TOKEN_FLD_SOURCE:
17001 case CPP_TOKEN_FLD_TOKEN_NO:
17002 case CPP_TOKEN_FLD_PRAGMA:
17003 /* These do not occur inside a macro itself. */
17004 gcc_unreachable ();
17005 }
17006 }
17007
17008 if (len)
17009 {
17010 char *ptr = reinterpret_cast<char *> (sec.buf (len));
17011 len = 0;
17012 for (unsigned ix = 0; ix != macro->count; ix++)
17013 {
17014 const cpp_token *token = &macro->exp.tokens[ix];
17015 if (cpp_token_val_index (token) == CPP_TOKEN_FLD_STR)
17016 {
17017 memcpy (ptr + len, token->val.str.text,
17018 token->val.str.len);
17019 len += token->val.str.len;
17020 ptr[len++] = 0;
17021 }
17022 }
17023 }
17024 }
17025
17026 /* Read a macro definition. */
17027
17028 cpp_macro *
17029 module_state::read_define (bytes_in &sec, cpp_reader *reader) const
17030 {
17031 unsigned count = sec.u ();
17032 /* We rely on knowing cpp_reader's hash table is ident_hash, and
17033 its subobject allocator is stringpool_ggc_alloc and that is just
17034 a wrapper for ggc_alloc_atomic. */
17035 cpp_macro *macro
17036 = (cpp_macro *)ggc_alloc_atomic (sizeof (cpp_macro)
17037 + sizeof (cpp_token) * (count - !!count));
17038 memset (macro, 0, sizeof (cpp_macro) + sizeof (cpp_token) * (count - !!count));
17039
17040 macro->count = count;
17041 macro->kind = cmk_macro;
17042 macro->imported_p = true;
17043
17044 bytes_in::bits_in bits = sec.stream_bits ();
17045 macro->fun_like = bits.b ();
17046 macro->variadic = bits.b ();
17047 macro->syshdr = bits.b ();
17048 bits.bflush ();
17049
17050 macro->line = read_location (sec);
17051
17052 if (macro->fun_like)
17053 {
17054 unsigned paramc = sec.u ();
17055 cpp_hashnode **params
17056 = (cpp_hashnode **)ggc_alloc_atomic (sizeof (cpp_hashnode *) * paramc);
17057 macro->paramc = paramc;
17058 macro->parm.params = params;
17059 for (unsigned ix = 0; ix != paramc; ix++)
17060 params[ix] = sec.cpp_node ();
17061 }
17062
17063 unsigned len = 0;
17064 for (unsigned ix = 0; ix != count && !sec.get_overrun (); ix++)
17065 {
17066 cpp_token *token = &macro->exp.tokens[ix];
17067 token->src_loc = read_location (sec);
17068 token->type = cpp_ttype (sec.u ());
17069 token->flags = sec.u ();
17070 switch (cpp_token_val_index (token))
17071 {
17072 default:
17073 sec.set_overrun ();
17074 break;
17075
17076 case CPP_TOKEN_FLD_ARG_NO:
17077 /* An argument reference. */
17078 {
17079 unsigned arg_no = sec.u ();
17080 if (arg_no - 1 >= macro->paramc)
17081 sec.set_overrun ();
17082 token->val.macro_arg.arg_no = arg_no;
17083 token->val.macro_arg.spelling = sec.cpp_node ();
17084 }
17085 break;
17086
17087 case CPP_TOKEN_FLD_NODE:
17088 /* An identifier. */
17089 token->val.node.node = sec.cpp_node ();
17090 token->val.node.spelling = sec.cpp_node ();
17091 if (!token->val.node.spelling)
17092 token->val.node.spelling = token->val.node.node;
17093 break;
17094
17095 case CPP_TOKEN_FLD_NONE:
17096 break;
17097
17098 case CPP_TOKEN_FLD_STR:
17099 /* A string, number or comment. */
17100 token->val.str.len = sec.u ();
17101 len += token->val.str.len + 1;
17102 break;
17103 }
17104 }
17105
17106 if (len)
17107 if (const char *ptr = reinterpret_cast<const char *> (sec.buf (len)))
17108 {
17109 /* There should be a final NUL. */
17110 if (ptr[len-1])
17111 sec.set_overrun ();
17112 /* cpp_alloc_token_string will add a final NUL. */
17113 const unsigned char *buf
17114 = cpp_alloc_token_string (reader, (const unsigned char *)ptr, len - 1);
17115 len = 0;
17116 for (unsigned ix = 0; ix != count && !sec.get_overrun (); ix++)
17117 {
17118 cpp_token *token = &macro->exp.tokens[ix];
17119 if (cpp_token_val_index (token) == CPP_TOKEN_FLD_STR)
17120 {
17121 token->val.str.text = buf + len;
17122 len += token->val.str.len;
17123 if (buf[len++])
17124 sec.set_overrun ();
17125 }
17126 }
17127 }
17128
17129 if (sec.get_overrun ())
17130 return NULL;
17131 return macro;
17132 }
17133
17134 /* Exported macro data. */
17135 struct GTY(()) macro_export {
17136 cpp_macro *def;
17137 location_t undef_loc;
17138
17139 macro_export ()
17140 :def (NULL), undef_loc (UNKNOWN_LOCATION)
17141 {
17142 }
17143 };
17144
17145 /* Imported macro data. */
17146 class macro_import {
17147 public:
17148 struct slot {
17149 #if defined (WORDS_BIGENDIAN) && SIZEOF_VOID_P == 8
17150 int offset;
17151 #endif
17152 /* We need to ensure we don't use the LSB for representation, as
17153 that's the union discriminator below. */
17154 unsigned bits;
17155
17156 #if !(defined (WORDS_BIGENDIAN) && SIZEOF_VOID_P == 8)
17157 int offset;
17158 #endif
17159
17160 public:
17161 enum Layout {
17162 L_DEF = 1,
17163 L_UNDEF = 2,
17164 L_BOTH = 3,
17165 L_MODULE_SHIFT = 2
17166 };
17167
17168 public:
17169 /* Not a regular ctor, because we put it in a union, and that's
17170 not allowed in C++ 98. */
17171 static slot ctor (unsigned module, unsigned defness)
17172 {
17173 gcc_checking_assert (defness);
17174 slot s;
17175 s.bits = defness | (module << L_MODULE_SHIFT);
17176 s.offset = -1;
17177 return s;
17178 }
17179
17180 public:
17181 unsigned get_defness () const
17182 {
17183 return bits & L_BOTH;
17184 }
17185 unsigned get_module () const
17186 {
17187 return bits >> L_MODULE_SHIFT;
17188 }
17189 void become_undef ()
17190 {
17191 bits &= ~unsigned (L_DEF);
17192 bits |= unsigned (L_UNDEF);
17193 }
17194 };
17195
17196 private:
17197 typedef vec<slot, va_heap, vl_embed> ary_t;
17198 union either {
17199 /* Discriminated by bits 0|1 != 0. The expected case is that
17200 there will be exactly one slot per macro, hence the effort of
17201 packing that. */
17202 ary_t *ary;
17203 slot single;
17204 } u;
17205
17206 public:
17207 macro_import ()
17208 {
17209 u.ary = NULL;
17210 }
17211
17212 private:
17213 bool single_p () const
17214 {
17215 return u.single.bits & slot::L_BOTH;
17216 }
17217 bool occupied_p () const
17218 {
17219 return u.ary != NULL;
17220 }
17221
17222 public:
17223 unsigned length () const
17224 {
17225 gcc_checking_assert (occupied_p ());
17226 return single_p () ? 1 : u.ary->length ();
17227 }
17228 slot &operator[] (unsigned ix)
17229 {
17230 gcc_checking_assert (occupied_p ());
17231 if (single_p ())
17232 {
17233 gcc_checking_assert (!ix);
17234 return u.single;
17235 }
17236 else
17237 return (*u.ary)[ix];
17238 }
17239
17240 public:
17241 slot &exported ();
17242 slot &append (unsigned module, unsigned defness);
17243 };
17244
17245 /* O is a new import to append to the list for. If we're an empty
17246 set, initialize us. */
17247
17248 macro_import::slot &
17249 macro_import::append (unsigned module, unsigned defness)
17250 {
17251 if (!occupied_p ())
17252 {
17253 u.single = slot::ctor (module, defness);
17254 return u.single;
17255 }
17256 else
17257 {
17258 bool single = single_p ();
17259 ary_t *m = single ? NULL : u.ary;
17260 vec_safe_reserve (m, 1 + single);
17261 if (single)
17262 m->quick_push (u.single);
17263 u.ary = m;
17264 return *u.ary->quick_push (slot::ctor (module, defness));
17265 }
17266 }
17267
17268 /* We're going to export something. Make sure the first import slot
17269 is us. */
17270
17271 macro_import::slot &
17272 macro_import::exported ()
17273 {
17274 if (occupied_p () && !(*this)[0].get_module ())
17275 {
17276 slot &res = (*this)[0];
17277 res.bits |= slot::L_DEF;
17278 return res;
17279 }
17280
17281 slot *a = &append (0, slot::L_DEF);
17282 if (!single_p ())
17283 {
17284 slot &f = (*this)[0];
17285 std::swap (f, *a);
17286 a = &f;
17287 }
17288 return *a;
17289 }
17290
17291 /* The import (&exported) macros. cpp_hasnode's deferred field
17292 indexes this array (offset by 1, so zero means 'not present'. */
17293
17294 static vec<macro_import, va_heap, vl_embed> *macro_imports;
17295
17296 /* The exported macros. A macro_import slot's zeroth element's offset
17297 indexes this array. If the zeroth slot is not for module zero,
17298 there is no export. */
17299
17300 static GTY(()) vec<macro_export, va_gc> *macro_exports;
17301
17302 /* The reachable set of header imports from this TU. */
17303
17304 static GTY(()) bitmap headers;
17305
17306 /* Get the (possibly empty) macro imports for NODE. */
17307
17308 static macro_import &
17309 get_macro_imports (cpp_hashnode *node)
17310 {
17311 if (node->deferred)
17312 return (*macro_imports)[node->deferred - 1];
17313
17314 vec_safe_reserve (macro_imports, 1);
17315 node->deferred = macro_imports->length () + 1;
17316 return *vec_safe_push (macro_imports, macro_import ());
17317 }
17318
17319 /* Get the macro export for export EXP of NODE. */
17320
17321 static macro_export &
17322 get_macro_export (macro_import::slot &slot)
17323 {
17324 if (slot.offset >= 0)
17325 return (*macro_exports)[slot.offset];
17326
17327 vec_safe_reserve (macro_exports, 1);
17328 slot.offset = macro_exports->length ();
17329 return *macro_exports->quick_push (macro_export ());
17330 }
17331
17332 /* If NODE is an exportable macro, add it to the export set. */
17333
17334 static int
17335 maybe_add_macro (cpp_reader *, cpp_hashnode *node, void *data_)
17336 {
17337 bool exporting = false;
17338
17339 if (cpp_user_macro_p (node))
17340 if (cpp_macro *macro = node->value.macro)
17341 /* Ignore imported, builtins, command line and forced header macros. */
17342 if (!macro->imported_p
17343 && !macro->lazy && macro->line >= spans.main_start ())
17344 {
17345 gcc_checking_assert (macro->kind == cmk_macro);
17346 /* I don't want to deal with this corner case, that I suspect is
17347 a devil's advocate reading of the standard. */
17348 gcc_checking_assert (!macro->extra_tokens);
17349
17350 macro_import::slot &slot = get_macro_imports (node).exported ();
17351 macro_export &exp = get_macro_export (slot);
17352 exp.def = macro;
17353 exporting = true;
17354 }
17355
17356 if (!exporting && node->deferred)
17357 {
17358 macro_import &imports = (*macro_imports)[node->deferred - 1];
17359 macro_import::slot &slot = imports[0];
17360 if (!slot.get_module ())
17361 {
17362 gcc_checking_assert (slot.get_defness ());
17363 exporting = true;
17364 }
17365 }
17366
17367 if (exporting)
17368 static_cast<vec<cpp_hashnode *> *> (data_)->safe_push (node);
17369
17370 return 1; /* Don't stop. */
17371 }
17372
17373 /* Order cpp_hashnodes A_ and B_ by their exported macro locations. */
17374
17375 static int
17376 macro_loc_cmp (const void *a_, const void *b_)
17377 {
17378 const cpp_hashnode *node_a = *(const cpp_hashnode *const *)a_;
17379 macro_import &import_a = (*macro_imports)[node_a->deferred - 1];
17380 const macro_export &export_a = (*macro_exports)[import_a[0].offset];
17381 location_t loc_a = export_a.def ? export_a.def->line : export_a.undef_loc;
17382
17383 const cpp_hashnode *node_b = *(const cpp_hashnode *const *)b_;
17384 macro_import &import_b = (*macro_imports)[node_b->deferred - 1];
17385 const macro_export &export_b = (*macro_exports)[import_b[0].offset];
17386 location_t loc_b = export_b.def ? export_b.def->line : export_b.undef_loc;
17387
17388 if (loc_a < loc_b)
17389 return +1;
17390 else if (loc_a > loc_b)
17391 return -1;
17392 else
17393 return 0;
17394 }
17395
17396 /* Gather the macro definitions and undefinitions that we will need to
17397 write out. */
17398
17399 vec<cpp_hashnode *> *
17400 module_state::prepare_macros (cpp_reader *reader)
17401 {
17402 vec<cpp_hashnode *> *macros;
17403 vec_alloc (macros, 100);
17404
17405 cpp_forall_identifiers (reader, maybe_add_macro, macros);
17406
17407 dump (dumper::MACRO) && dump ("No more than %u macros", macros->length ());
17408
17409 macros->qsort (macro_loc_cmp);
17410
17411 // Note the locations.
17412 for (unsigned ix = macros->length (); ix--;)
17413 {
17414 cpp_hashnode *node = (*macros)[ix];
17415 macro_import::slot &slot = (*macro_imports)[node->deferred - 1][0];
17416 macro_export &mac = (*macro_exports)[slot.offset];
17417
17418 if (IDENTIFIER_KEYWORD_P (identifier (node)))
17419 continue;
17420
17421 if (mac.undef_loc != UNKNOWN_LOCATION)
17422 note_location (mac.undef_loc);
17423 if (mac.def)
17424 {
17425 note_location (mac.def->line);
17426 for (unsigned ix = 0; ix != mac.def->count; ix++)
17427 note_location (mac.def->exp.tokens[ix].src_loc);
17428 }
17429 }
17430
17431 return macros;
17432 }
17433
17434 /* Write out the exported defines. This is two sections, one
17435 containing the definitions, the other a table of node names. */
17436
17437 unsigned
17438 module_state::write_macros (elf_out *to, vec<cpp_hashnode *> *macros,
17439 unsigned *crc_p)
17440 {
17441 dump () && dump ("Writing macros");
17442 dump.indent ();
17443
17444 /* Write the defs */
17445 bytes_out sec (to);
17446 sec.begin ();
17447
17448 unsigned count = 0;
17449 for (unsigned ix = macros->length (); ix--;)
17450 {
17451 cpp_hashnode *node = (*macros)[ix];
17452 macro_import::slot &slot = (*macro_imports)[node->deferred - 1][0];
17453 gcc_assert (!slot.get_module () && slot.get_defness ());
17454
17455 macro_export &mac = (*macro_exports)[slot.offset];
17456 gcc_assert (!!(slot.get_defness () & macro_import::slot::L_UNDEF)
17457 == (mac.undef_loc != UNKNOWN_LOCATION)
17458 && !!(slot.get_defness () & macro_import::slot::L_DEF)
17459 == (mac.def != NULL));
17460
17461 if (IDENTIFIER_KEYWORD_P (identifier (node)))
17462 {
17463 warning_at (mac.def->line, 0,
17464 "not exporting %<#define %E%> as it is a keyword",
17465 identifier (node));
17466 slot.offset = 0;
17467 continue;
17468 }
17469
17470 count++;
17471 slot.offset = sec.pos;
17472 dump (dumper::MACRO)
17473 && dump ("Writing macro %s%s%s %I at %u",
17474 slot.get_defness () & macro_import::slot::L_UNDEF
17475 ? "#undef" : "",
17476 slot.get_defness () == macro_import::slot::L_BOTH
17477 ? " & " : "",
17478 slot.get_defness () & macro_import::slot::L_DEF
17479 ? "#define" : "",
17480 identifier (node), slot.offset);
17481 if (mac.undef_loc != UNKNOWN_LOCATION)
17482 write_location (sec, mac.undef_loc);
17483 if (mac.def)
17484 write_define (sec, mac.def);
17485 }
17486 if (count)
17487 // We may have ended on a tokenless macro with a very short
17488 // location, that will cause problems reading its bit flags.
17489 sec.u (0);
17490 sec.end (to, to->name (MOD_SNAME_PFX ".def"), crc_p);
17491
17492 if (count)
17493 {
17494 /* Write the table. */
17495 bytes_out sec (to);
17496 sec.begin ();
17497 sec.u (count);
17498
17499 for (unsigned ix = macros->length (); ix--;)
17500 {
17501 const cpp_hashnode *node = (*macros)[ix];
17502 macro_import::slot &slot = (*macro_imports)[node->deferred - 1][0];
17503
17504 if (slot.offset)
17505 {
17506 sec.cpp_node (node);
17507 sec.u (slot.get_defness ());
17508 sec.u (slot.offset);
17509 }
17510 }
17511 sec.end (to, to->name (MOD_SNAME_PFX ".mac"), crc_p);
17512 }
17513
17514 dump.outdent ();
17515 return count;
17516 }
17517
17518 bool
17519 module_state::read_macros ()
17520 {
17521 /* Get the def section. */
17522 if (!slurp->macro_defs.begin (loc, from (), MOD_SNAME_PFX ".def"))
17523 return false;
17524
17525 /* Get the tbl section, if there are defs. */
17526 if (slurp->macro_defs.more_p ()
17527 && !slurp->macro_tbl.begin (loc, from (), MOD_SNAME_PFX ".mac"))
17528 return false;
17529
17530 return true;
17531 }
17532
17533 /* Install the macro name table. */
17534
17535 void
17536 module_state::install_macros ()
17537 {
17538 bytes_in &sec = slurp->macro_tbl;
17539 if (!sec.size)
17540 return;
17541
17542 dump () && dump ("Reading macro table %M", this);
17543 dump.indent ();
17544
17545 unsigned count = sec.u ();
17546 dump () && dump ("%u macros", count);
17547 while (count--)
17548 {
17549 cpp_hashnode *node = sec.cpp_node ();
17550 macro_import &imp = get_macro_imports (node);
17551 unsigned flags = sec.u () & macro_import::slot::L_BOTH;
17552 if (!flags)
17553 sec.set_overrun ();
17554
17555 if (sec.get_overrun ())
17556 break;
17557
17558 macro_import::slot &slot = imp.append (mod, flags);
17559 slot.offset = sec.u ();
17560
17561 dump (dumper::MACRO)
17562 && dump ("Read %s macro %s%s%s %I at %u",
17563 imp.length () > 1 ? "add" : "new",
17564 flags & macro_import::slot::L_UNDEF ? "#undef" : "",
17565 flags == macro_import::slot::L_BOTH ? " & " : "",
17566 flags & macro_import::slot::L_DEF ? "#define" : "",
17567 identifier (node), slot.offset);
17568
17569 /* We'll leak an imported definition's TOKEN_FLD_STR's data
17570 here. But that only happens when we've had to resolve the
17571 deferred macro before this import -- why are you doing
17572 that? */
17573 if (cpp_macro *cur = cpp_set_deferred_macro (node))
17574 if (!cur->imported_p)
17575 {
17576 macro_import::slot &slot = imp.exported ();
17577 macro_export &exp = get_macro_export (slot);
17578 exp.def = cur;
17579 dump (dumper::MACRO)
17580 && dump ("Saving current #define %I", identifier (node));
17581 }
17582 }
17583
17584 /* We're now done with the table. */
17585 elf_in::release (slurp->from, sec);
17586
17587 dump.outdent ();
17588 }
17589
17590 /* Import the transitive macros. */
17591
17592 void
17593 module_state::import_macros ()
17594 {
17595 bitmap_ior_into (headers, slurp->headers);
17596
17597 bitmap_iterator bititer;
17598 unsigned bitnum;
17599 EXECUTE_IF_SET_IN_BITMAP (slurp->headers, 0, bitnum, bititer)
17600 (*modules)[bitnum]->install_macros ();
17601 }
17602
17603 /* NODE is being undefined at LOC. Record it in the export table, if
17604 necessary. */
17605
17606 void
17607 module_state::undef_macro (cpp_reader *, location_t loc, cpp_hashnode *node)
17608 {
17609 if (!node->deferred)
17610 /* The macro is not imported, so our undef is irrelevant. */
17611 return;
17612
17613 unsigned n = dump.push (NULL);
17614
17615 macro_import::slot &slot = (*macro_imports)[node->deferred - 1].exported ();
17616 macro_export &exp = get_macro_export (slot);
17617
17618 exp.undef_loc = loc;
17619 slot.become_undef ();
17620 exp.def = NULL;
17621
17622 dump (dumper::MACRO) && dump ("Recording macro #undef %I", identifier (node));
17623
17624 dump.pop (n);
17625 }
17626
17627 /* NODE is a deferred macro node. Determine the definition and return
17628 it, with NULL if undefined. May issue diagnostics.
17629
17630 This can leak memory, when merging declarations -- the string
17631 contents (TOKEN_FLD_STR) of each definition are allocated in
17632 unreclaimable cpp objstack. Only one will win. However, I do not
17633 expect this to be common -- mostly macros have a single point of
17634 definition. Perhaps we could restore the objstack to its position
17635 after the first imported definition (if that wins)? The macros
17636 themselves are GC'd. */
17637
17638 cpp_macro *
17639 module_state::deferred_macro (cpp_reader *reader, location_t loc,
17640 cpp_hashnode *node)
17641 {
17642 macro_import &imports = (*macro_imports)[node->deferred - 1];
17643
17644 unsigned n = dump.push (NULL);
17645 dump (dumper::MACRO) && dump ("Deferred macro %I", identifier (node));
17646
17647 bitmap visible (BITMAP_GGC_ALLOC ());
17648
17649 if (!((imports[0].get_defness () & macro_import::slot::L_UNDEF)
17650 && !imports[0].get_module ()))
17651 {
17652 /* Calculate the set of visible header imports. */
17653 bitmap_copy (visible, headers);
17654 for (unsigned ix = imports.length (); ix--;)
17655 {
17656 const macro_import::slot &slot = imports[ix];
17657 unsigned mod = slot.get_module ();
17658 if ((slot.get_defness () & macro_import::slot::L_UNDEF)
17659 && bitmap_bit_p (visible, mod))
17660 {
17661 bitmap arg = mod ? (*modules)[mod]->slurp->headers : headers;
17662 bitmap_and_compl_into (visible, arg);
17663 bitmap_set_bit (visible, mod);
17664 }
17665 }
17666 }
17667 bitmap_set_bit (visible, 0);
17668
17669 /* Now find the macros that are still visible. */
17670 bool failed = false;
17671 cpp_macro *def = NULL;
17672 vec<macro_export> defs;
17673 defs.create (imports.length ());
17674 for (unsigned ix = imports.length (); ix--;)
17675 {
17676 const macro_import::slot &slot = imports[ix];
17677 unsigned mod = slot.get_module ();
17678 if (bitmap_bit_p (visible, mod))
17679 {
17680 macro_export *pushed = NULL;
17681 if (mod)
17682 {
17683 const module_state *imp = (*modules)[mod];
17684 bytes_in &sec = imp->slurp->macro_defs;
17685 if (!sec.get_overrun ())
17686 {
17687 dump (dumper::MACRO)
17688 && dump ("Reading macro %s%s%s %I module %M at %u",
17689 slot.get_defness () & macro_import::slot::L_UNDEF
17690 ? "#undef" : "",
17691 slot.get_defness () == macro_import::slot::L_BOTH
17692 ? " & " : "",
17693 slot.get_defness () & macro_import::slot::L_DEF
17694 ? "#define" : "",
17695 identifier (node), imp, slot.offset);
17696 sec.random_access (slot.offset);
17697
17698 macro_export exp;
17699 if (slot.get_defness () & macro_import::slot::L_UNDEF)
17700 exp.undef_loc = imp->read_location (sec);
17701 if (slot.get_defness () & macro_import::slot::L_DEF)
17702 exp.def = imp->read_define (sec, reader);
17703 if (sec.get_overrun ())
17704 error_at (loc, "macro definitions of %qE corrupted",
17705 imp->name);
17706 else
17707 pushed = defs.quick_push (exp);
17708 }
17709 }
17710 else
17711 pushed = defs.quick_push ((*macro_exports)[slot.offset]);
17712 if (pushed && pushed->def)
17713 {
17714 if (!def)
17715 def = pushed->def;
17716 else if (cpp_compare_macros (def, pushed->def))
17717 failed = true;
17718 }
17719 }
17720 }
17721
17722 if (failed)
17723 {
17724 /* If LOC is the first loc, this is the end of file check, which
17725 is a warning. */
17726 if (loc == MAP_START_LOCATION (LINEMAPS_ORDINARY_MAP_AT (line_table, 0)))
17727 warning_at (loc, OPT_Winvalid_imported_macros,
17728 "inconsistent imported macro definition %qE",
17729 identifier (node));
17730 else
17731 error_at (loc, "inconsistent imported macro definition %qE",
17732 identifier (node));
17733 for (unsigned ix = defs.length (); ix--;)
17734 {
17735 macro_export &exp = defs[ix];
17736 if (exp.undef_loc)
17737 inform (exp.undef_loc, "%<#undef %E%>", identifier (node));
17738 if (exp.def)
17739 inform (exp.def->line, "%<#define %s%>",
17740 cpp_macro_definition (reader, node, exp.def));
17741 }
17742 def = NULL;
17743 }
17744
17745 defs.release ();
17746
17747 dump.pop (n);
17748
17749 return def;
17750 }
17751
17752 /* Stream the static aggregates. Sadly some headers (ahem:
17753 iostream) contain static vars, and rely on them to run global
17754 ctors. */
17755 unsigned
17756 module_state::write_inits (elf_out *to, depset::hash &table, unsigned *crc_ptr)
17757 {
17758 if (!static_aggregates && !tls_aggregates)
17759 return 0;
17760
17761 dump () && dump ("Writing initializers");
17762 dump.indent ();
17763
17764 static_aggregates = nreverse (static_aggregates);
17765 tls_aggregates = nreverse (tls_aggregates);
17766
17767 unsigned count = 0;
17768 trees_out sec (to, this, table, ~0u);
17769 sec.begin ();
17770
17771 tree list = static_aggregates;
17772 for (int passes = 0; passes != 2; passes++)
17773 {
17774 for (tree init = list; init; init = TREE_CHAIN (init))
17775 if (TREE_LANG_FLAG_0 (init))
17776 {
17777 tree decl = TREE_VALUE (init);
17778
17779 dump ("Initializer:%u for %N", count, decl);
17780 sec.tree_node (decl);
17781 ++count;
17782 }
17783
17784 list = tls_aggregates;
17785 }
17786
17787 sec.end (to, to->name (MOD_SNAME_PFX ".ini"), crc_ptr);
17788 dump.outdent ();
17789
17790 return count;
17791 }
17792
17793 /* We have to defer some post-load processing until we've completed
17794 reading, because they can cause more reading. */
17795
17796 static void
17797 post_load_processing ()
17798 {
17799 /* We mustn't cause a GC, our caller should have arranged for that
17800 not to happen. */
17801 gcc_checking_assert (function_depth);
17802
17803 if (!post_load_decls)
17804 return;
17805
17806 tree old_cfd = current_function_decl;
17807 struct function *old_cfun = cfun;
17808 while (post_load_decls->length ())
17809 {
17810 tree decl = post_load_decls->pop ();
17811
17812 dump () && dump ("Post-load processing of %N", decl);
17813
17814 gcc_checking_assert (DECL_ABSTRACT_P (decl));
17815 /* Cloning can cause loading -- specifically operator delete for
17816 the deleting dtor. */
17817 maybe_clone_body (decl);
17818 }
17819
17820 cfun = old_cfun;
17821 current_function_decl = old_cfd;
17822 }
17823
17824 bool
17825 module_state::read_inits (unsigned count)
17826 {
17827 trees_in sec (this);
17828 if (!sec.begin (loc, from (), from ()->find (MOD_SNAME_PFX ".ini")))
17829 return false;
17830 dump () && dump ("Reading %u initializers", count);
17831 dump.indent ();
17832
17833 lazy_snum = ~0u;
17834 for (unsigned ix = 0; ix != count; ix++)
17835 {
17836 /* Merely referencing the decl causes its initializer to be read
17837 and added to the correct list. */
17838 tree decl = sec.tree_node ();
17839
17840 if (sec.get_overrun ())
17841 break;
17842 if (decl)
17843 dump ("Initializer:%u for %N", count, decl);
17844 }
17845 lazy_snum = 0;
17846 post_load_processing ();
17847 dump.outdent ();
17848 if (!sec.end (from ()))
17849 return false;
17850 return true;
17851 }
17852
17853 void
17854 module_state::write_counts (elf_out *to, unsigned counts[MSC_HWM],
17855 unsigned *crc_ptr)
17856 {
17857 bytes_out cfg (to);
17858
17859 cfg.begin ();
17860
17861 for (unsigned ix = MSC_HWM; ix--;)
17862 cfg.u (counts[ix]);
17863
17864 if (dump ())
17865 {
17866 dump ("Cluster sections are [%u,%u)",
17867 counts[MSC_sec_lwm], counts[MSC_sec_hwm]);
17868 dump ("Bindings %u", counts[MSC_bindings]);
17869 dump ("Pendings %u", counts[MSC_pendings]);
17870 dump ("Entities %u", counts[MSC_entities]);
17871 dump ("Namespaces %u", counts[MSC_namespaces]);
17872 dump ("Macros %u", counts[MSC_macros]);
17873 dump ("Initializers %u", counts[MSC_inits]);
17874 }
17875
17876 cfg.end (to, to->name (MOD_SNAME_PFX ".cnt"), crc_ptr);
17877 }
17878
17879 bool
17880 module_state::read_counts (unsigned counts[MSC_HWM])
17881 {
17882 bytes_in cfg;
17883
17884 if (!cfg.begin (loc, from (), MOD_SNAME_PFX ".cnt"))
17885 return false;
17886
17887 for (unsigned ix = MSC_HWM; ix--;)
17888 counts[ix] = cfg.u ();
17889
17890 if (dump ())
17891 {
17892 dump ("Declaration sections are [%u,%u)",
17893 counts[MSC_sec_lwm], counts[MSC_sec_hwm]);
17894 dump ("Bindings %u", counts[MSC_bindings]);
17895 dump ("Pendings %u", counts[MSC_pendings]);
17896 dump ("Entities %u", counts[MSC_entities]);
17897 dump ("Namespaces %u", counts[MSC_namespaces]);
17898 dump ("Macros %u", counts[MSC_macros]);
17899 dump ("Initializers %u", counts[MSC_inits]);
17900 }
17901
17902 return cfg.end (from ());
17903 }
17904
17905 /* Tool configuration: MOD_SNAME_PFX .config
17906
17907 This is data that confirms current state (or fails). */
17908
17909 void
17910 module_state::write_config (elf_out *to, module_state_config &config,
17911 unsigned inner_crc)
17912 {
17913 bytes_out cfg (to);
17914
17915 cfg.begin ();
17916
17917 /* Write version and inner crc as u32 values, for easier
17918 debug inspection. */
17919 dump () && dump ("Writing version=%V, inner_crc=%x",
17920 MODULE_VERSION, inner_crc);
17921 cfg.u32 (unsigned (MODULE_VERSION));
17922 cfg.u32 (inner_crc);
17923
17924 cfg.u (to->name (is_header () ? "" : get_flatname ()));
17925
17926 /* Configuration. */
17927 dump () && dump ("Writing target='%s', host='%s'",
17928 TARGET_MACHINE, HOST_MACHINE);
17929 unsigned target = to->name (TARGET_MACHINE);
17930 unsigned host = (!strcmp (TARGET_MACHINE, HOST_MACHINE)
17931 ? target : to->name (HOST_MACHINE));
17932 cfg.u (target);
17933 cfg.u (host);
17934
17935 cfg.str (config.dialect_str);
17936 cfg.u (extensions);
17937
17938 /* Global tree information. We write the globals crc separately,
17939 rather than mix it directly into the overall crc, as it is used
17940 to ensure data match between instances of the compiler, not
17941 integrity of the file. */
17942 dump () && dump ("Writing globals=%u, crc=%x",
17943 fixed_trees->length (), global_crc);
17944 cfg.u (fixed_trees->length ());
17945 cfg.u32 (global_crc);
17946
17947 if (is_partition ())
17948 cfg.u (is_interface ());
17949
17950 cfg.u (config.num_imports);
17951 cfg.u (config.num_partitions);
17952 cfg.u (config.num_entities);
17953
17954 cfg.u (config.ordinary_locs);
17955 cfg.u (config.macro_locs);
17956 cfg.u (config.loc_range_bits);
17957
17958 cfg.u (config.active_init);
17959
17960 /* Now generate CRC, we'll have incorporated the inner CRC because
17961 of its serialization above. */
17962 cfg.end (to, to->name (MOD_SNAME_PFX ".cfg"), &crc);
17963 dump () && dump ("Writing CRC=%x", crc);
17964 }
17965
17966 void
17967 module_state::note_cmi_name ()
17968 {
17969 if (!cmi_noted_p && filename)
17970 {
17971 cmi_noted_p = true;
17972 inform (loc, "compiled module file is %qs",
17973 maybe_add_cmi_prefix (filename));
17974 }
17975 }
17976
17977 bool
17978 module_state::read_config (module_state_config &config)
17979 {
17980 bytes_in cfg;
17981
17982 if (!cfg.begin (loc, from (), MOD_SNAME_PFX ".cfg"))
17983 return false;
17984
17985 /* Check version. */
17986 unsigned my_ver = MODULE_VERSION;
17987 unsigned their_ver = cfg.u32 ();
17988 dump () && dump (my_ver == their_ver ? "Version %V"
17989 : "Expecting %V found %V", my_ver, their_ver);
17990 if (their_ver != my_ver)
17991 {
17992 /* The compiler versions differ. Close enough? */
17993 verstr_t my_string, their_string;
17994
17995 version2string (my_ver, my_string);
17996 version2string (their_ver, their_string);
17997
17998 /* Reject when either is non-experimental or when experimental
17999 major versions differ. */
18000 bool reject_p = ((!IS_EXPERIMENTAL (my_ver)
18001 || !IS_EXPERIMENTAL (their_ver)
18002 || MODULE_MAJOR (my_ver) != MODULE_MAJOR (their_ver))
18003 /* The 'I know what I'm doing' switch. */
18004 && !flag_module_version_ignore);
18005 bool inform_p = true;
18006 if (reject_p)
18007 {
18008 cfg.set_overrun ();
18009 error_at (loc, "compiled module is %sversion %s",
18010 IS_EXPERIMENTAL (their_ver) ? "experimental " : "",
18011 their_string);
18012 }
18013 else
18014 inform_p = warning_at (loc, 0, "compiled module is %sversion %s",
18015 IS_EXPERIMENTAL (their_ver) ? "experimental " : "",
18016 their_string);
18017
18018 if (inform_p)
18019 {
18020 inform (loc, "compiler is %sversion %s%s%s",
18021 IS_EXPERIMENTAL (my_ver) ? "experimental " : "",
18022 my_string,
18023 reject_p ? "" : flag_module_version_ignore
18024 ? ", be it on your own head!" : ", close enough?",
18025 reject_p ? "" : " \xc2\xaf\\_(\xe3\x83\x84)_/\xc2\xaf");
18026 note_cmi_name ();
18027 }
18028
18029 if (reject_p)
18030 goto done;
18031 }
18032
18033 /* We wrote the inner crc merely to merge it, so simply read it
18034 back and forget it. */
18035 cfg.u32 ();
18036
18037 /* Check module name. */
18038 {
18039 const char *their_name = from ()->name (cfg.u ());
18040 const char *our_name = "";
18041
18042 if (!is_header ())
18043 our_name = get_flatname ();
18044
18045 /* Header units can be aliased, so name checking is
18046 inappropriate. */
18047 if (0 != strcmp (their_name, our_name))
18048 {
18049 error_at (loc,
18050 their_name[0] && our_name[0] ? G_("module %qs found")
18051 : their_name[0]
18052 ? G_("header module expected, module %qs found")
18053 : G_("module %qs expected, header module found"),
18054 their_name[0] ? their_name : our_name);
18055 cfg.set_overrun ();
18056 goto done;
18057 }
18058 }
18059
18060 /* Check the CRC after the above sanity checks, so that the user is
18061 clued in. */
18062 {
18063 unsigned e_crc = crc;
18064 crc = cfg.get_crc ();
18065 dump () && dump ("Reading CRC=%x", crc);
18066 if (!is_direct () && crc != e_crc)
18067 {
18068 error_at (loc, "module %qs CRC mismatch", get_flatname ());
18069 cfg.set_overrun ();
18070 goto done;
18071 }
18072 }
18073
18074 /* Check target & host. */
18075 {
18076 const char *their_target = from ()->name (cfg.u ());
18077 const char *their_host = from ()->name (cfg.u ());
18078 dump () && dump ("Read target='%s', host='%s'", their_target, their_host);
18079 if (strcmp (their_target, TARGET_MACHINE)
18080 || strcmp (their_host, HOST_MACHINE))
18081 {
18082 error_at (loc, "target & host is %qs:%qs, expected %qs:%qs",
18083 their_target, TARGET_MACHINE, their_host, HOST_MACHINE);
18084 cfg.set_overrun ();
18085 goto done;
18086 }
18087 }
18088
18089 /* Check compilation dialect. This must match. */
18090 {
18091 const char *their_dialect = cfg.str ();
18092 if (strcmp (their_dialect, config.dialect_str))
18093 {
18094 error_at (loc, "language dialect differs %qs, expected %qs",
18095 their_dialect, config.dialect_str);
18096 cfg.set_overrun ();
18097 goto done;
18098 }
18099 }
18100
18101 /* Check for extensions. If they set any, we must have them set
18102 too. */
18103 {
18104 unsigned ext = cfg.u ();
18105 unsigned allowed = (flag_openmp ? SE_OPENMP : 0);
18106
18107 if (unsigned bad = ext & ~allowed)
18108 {
18109 if (bad & SE_OPENMP)
18110 error_at (loc, "module contains OpenMP, use %<-fopenmp%> to enable");
18111 cfg.set_overrun ();
18112 goto done;
18113 }
18114 extensions = ext;
18115 }
18116
18117 /* Check global trees. */
18118 {
18119 unsigned their_fixed_length = cfg.u ();
18120 unsigned their_fixed_crc = cfg.u32 ();
18121 dump () && dump ("Read globals=%u, crc=%x",
18122 their_fixed_length, their_fixed_crc);
18123 if (!flag_preprocess_only
18124 && (their_fixed_length != fixed_trees->length ()
18125 || their_fixed_crc != global_crc))
18126 {
18127 error_at (loc, "fixed tree mismatch");
18128 cfg.set_overrun ();
18129 goto done;
18130 }
18131 }
18132
18133 /* All non-partitions are interfaces. */
18134 interface_p = !is_partition () || cfg.u ();
18135
18136 config.num_imports = cfg.u ();
18137 config.num_partitions = cfg.u ();
18138 config.num_entities = cfg.u ();
18139
18140 config.ordinary_locs = cfg.u ();
18141 config.macro_locs = cfg.u ();
18142 config.loc_range_bits = cfg.u ();
18143
18144 config.active_init = cfg.u ();
18145
18146 done:
18147 return cfg.end (from ());
18148 }
18149
18150 /* Comparator for ordering the Ordered Ordinary Location array. */
18151
18152 static int
18153 ool_cmp (const void *a_, const void *b_)
18154 {
18155 auto *a = *static_cast<const module_state *const *> (a_);
18156 auto *b = *static_cast<const module_state *const *> (b_);
18157 if (a == b)
18158 return 0;
18159 else if (a->ordinary_locs.first < b->ordinary_locs.first)
18160 return -1;
18161 else
18162 return +1;
18163 }
18164
18165 /* Use ELROND format to record the following sections:
18166 qualified-names : binding value(s)
18167 MOD_SNAME_PFX.README : human readable, strings
18168 MOD_SNAME_PFX.ENV : environment strings, strings
18169 MOD_SNAME_PFX.nms : namespace hierarchy
18170 MOD_SNAME_PFX.bnd : binding table
18171 MOD_SNAME_PFX.spc : specialization table
18172 MOD_SNAME_PFX.imp : import table
18173 MOD_SNAME_PFX.ent : entity table
18174 MOD_SNAME_PFX.prt : partitions table
18175 MOD_SNAME_PFX.olm : ordinary line maps
18176 MOD_SNAME_PFX.mlm : macro line maps
18177 MOD_SNAME_PFX.def : macro definitions
18178 MOD_SNAME_PFX.mac : macro index
18179 MOD_SNAME_PFX.ini : inits
18180 MOD_SNAME_PFX.cnt : counts
18181 MOD_SNAME_PFX.cfg : config data
18182 */
18183
18184 void
18185 module_state::write_begin (elf_out *to, cpp_reader *reader,
18186 module_state_config &config, unsigned &crc)
18187 {
18188 /* Figure out remapped module numbers, which might elide
18189 partitions. */
18190 bitmap partitions = NULL;
18191 if (!is_header () && !is_partition ())
18192 partitions = BITMAP_GGC_ALLOC ();
18193 write_init_maps ();
18194
18195 unsigned mod_hwm = 1;
18196 for (unsigned ix = 1; ix != modules->length (); ix++)
18197 {
18198 module_state *imp = (*modules)[ix];
18199
18200 /* Promote any non-partition direct import from a partition, unless
18201 we're a partition. */
18202 if (!is_partition () && !imp->is_partition ()
18203 && imp->is_partition_direct ())
18204 imp->directness = MD_PURVIEW_DIRECT;
18205
18206 /* Write any import that is not a partition, unless we're a
18207 partition. */
18208 if (!partitions || !imp->is_partition ())
18209 imp->remap = mod_hwm++;
18210 else
18211 {
18212 dump () && dump ("Partition %M %u", imp, ix);
18213 bitmap_set_bit (partitions, ix);
18214 imp->remap = 0;
18215 /* All interface partitions must be exported. */
18216 if (imp->is_interface () && !bitmap_bit_p (exports, imp->mod))
18217 {
18218 error_at (imp->loc, "interface partition is not exported");
18219 bitmap_set_bit (exports, imp->mod);
18220 }
18221
18222 /* All the partition entities should have been loaded when
18223 loading the partition. */
18224 if (CHECKING_P)
18225 for (unsigned jx = 0; jx != imp->entity_num; jx++)
18226 {
18227 binding_slot *slot = &(*entity_ary)[imp->entity_lwm + jx];
18228 gcc_checking_assert (!slot->is_lazy ());
18229 }
18230 }
18231
18232 if (imp->is_direct () && (imp->remap || imp->is_partition ()))
18233 note_location (imp->imported_from ());
18234 }
18235
18236 if (partitions && bitmap_empty_p (partitions))
18237 /* No partitions present. */
18238 partitions = nullptr;
18239
18240 /* Find the set of decls we must write out. */
18241 depset::hash table (DECL_NAMESPACE_BINDINGS (global_namespace)->size () * 8);
18242 /* Add the specializations before the writables, so that we can
18243 detect injected friend specializations. */
18244 table.add_specializations (true);
18245 table.add_specializations (false);
18246 if (partial_specializations)
18247 {
18248 table.add_partial_entities (partial_specializations);
18249 partial_specializations = NULL;
18250 }
18251 table.add_namespace_entities (global_namespace, partitions);
18252 if (class_members)
18253 {
18254 table.add_class_entities (class_members);
18255 class_members = NULL;
18256 }
18257
18258 /* Now join everything up. */
18259 table.find_dependencies (this);
18260
18261 if (!table.finalize_dependencies ())
18262 {
18263 to->set_error ();
18264 return;
18265 }
18266
18267 #if CHECKING_P
18268 /* We're done verifying at-most once reading, reset to verify
18269 at-most once writing. */
18270 note_defs = note_defs_table_t::create_ggc (1000);
18271 #endif
18272
18273 /* Determine Strongy Connected Components. */
18274 vec<depset *> sccs = table.connect ();
18275
18276 vec_alloc (ool, modules->length ());
18277 for (unsigned ix = modules->length (); --ix;)
18278 {
18279 auto *import = (*modules)[ix];
18280 if (import->loadedness > ML_NONE
18281 && !(partitions && bitmap_bit_p (partitions, import->mod)))
18282 ool->quick_push (import);
18283 }
18284 ool->qsort (ool_cmp);
18285
18286 vec<cpp_hashnode *> *macros = nullptr;
18287 if (is_header ())
18288 macros = prepare_macros (reader);
18289
18290 config.num_imports = mod_hwm;
18291 config.num_partitions = modules->length () - mod_hwm;
18292 auto map_info = write_prepare_maps (&config, bool (config.num_partitions));
18293 unsigned counts[MSC_HWM];
18294 memset (counts, 0, sizeof (counts));
18295
18296 /* depset::cluster is the cluster number,
18297 depset::section is unspecified scratch value.
18298
18299 The following loops make use of the tarjan property that
18300 dependencies will be earlier in the SCCS array. */
18301
18302 /* This first loop determines the number of depsets in each SCC, and
18303 also the number of namespaces we're dealing with. During the
18304 loop, the meaning of a couple of depset fields now change:
18305
18306 depset::cluster -> size_of cluster, if first of cluster & !namespace
18307 depset::section -> section number of cluster (if !namespace). */
18308
18309 unsigned n_spaces = 0;
18310 counts[MSC_sec_lwm] = counts[MSC_sec_hwm] = to->get_section_limit ();
18311 for (unsigned size, ix = 0; ix < sccs.length (); ix += size)
18312 {
18313 depset **base = &sccs[ix];
18314
18315 if (base[0]->get_entity_kind () == depset::EK_NAMESPACE)
18316 {
18317 n_spaces++;
18318 size = 1;
18319 }
18320 else
18321 {
18322 /* Count the members in this cluster. */
18323 for (size = 1; ix + size < sccs.length (); size++)
18324 if (base[size]->cluster != base[0]->cluster)
18325 break;
18326
18327 for (unsigned jx = 0; jx != size; jx++)
18328 {
18329 /* Set the section number. */
18330 base[jx]->cluster = ~(~0u >> 1); /* A bad value. */
18331 base[jx]->section = counts[MSC_sec_hwm];
18332 }
18333
18334 /* Save the size in the first member's cluster slot. */
18335 base[0]->cluster = size;
18336
18337 counts[MSC_sec_hwm]++;
18338 }
18339 }
18340
18341 /* Write the clusters. Namespace decls are put in the spaces array.
18342 The meaning of depset::cluster changes to provide the
18343 unnamed-decl count of the depset's decl (and remains zero for
18344 non-decls and non-unnamed). */
18345 unsigned bytes = 0;
18346 vec<depset *> spaces;
18347 spaces.create (n_spaces);
18348
18349 for (unsigned size, ix = 0; ix < sccs.length (); ix += size)
18350 {
18351 depset **base = &sccs[ix];
18352
18353 if (base[0]->get_entity_kind () == depset::EK_NAMESPACE)
18354 {
18355 tree decl = base[0]->get_entity ();
18356 if (decl == global_namespace)
18357 base[0]->cluster = 0;
18358 else if (!base[0]->is_import ())
18359 {
18360 base[0]->cluster = counts[MSC_entities]++;
18361 spaces.quick_push (base[0]);
18362 counts[MSC_namespaces]++;
18363 if (CHECKING_P)
18364 {
18365 /* Add it to the entity map, such that we can tell it is
18366 part of us. */
18367 bool existed;
18368 unsigned *slot = &entity_map->get_or_insert
18369 (DECL_UID (decl), &existed);
18370 if (existed)
18371 /* It must have come from a partition. */
18372 gcc_checking_assert
18373 (import_entity_module (*slot)->is_partition ());
18374 *slot = ~base[0]->cluster;
18375 }
18376 dump (dumper::CLUSTER) && dump ("Cluster namespace %N", decl);
18377 }
18378 size = 1;
18379 }
18380 else
18381 {
18382 size = base[0]->cluster;
18383
18384 /* Cluster is now used to number entities. */
18385 base[0]->cluster = ~(~0u >> 1); /* A bad value. */
18386
18387 sort_cluster (&table, base, size);
18388
18389 /* Record the section for consistency checking during stream
18390 out -- we don't want to start writing decls in different
18391 sections. */
18392 table.section = base[0]->section;
18393 bytes += write_cluster (to, base, size, table, counts, &crc);
18394 table.section = 0;
18395 }
18396 }
18397
18398 /* depset::cluster - entity number (on entities)
18399 depset::section - cluster number */
18400 /* We'd better have written as many sections and found as many
18401 namespaces as we predicted. */
18402 gcc_assert (counts[MSC_sec_hwm] == to->get_section_limit ()
18403 && spaces.length () == counts[MSC_namespaces]);
18404
18405 /* Write the entitites. None happens if we contain namespaces or
18406 nothing. */
18407 config.num_entities = counts[MSC_entities];
18408 if (counts[MSC_entities])
18409 write_entities (to, sccs, counts[MSC_entities], &crc);
18410
18411 /* Write the namespaces. */
18412 if (counts[MSC_namespaces])
18413 write_namespaces (to, spaces, counts[MSC_namespaces], &crc);
18414
18415 /* Write the bindings themselves. */
18416 counts[MSC_bindings] = write_bindings (to, sccs, &crc);
18417
18418 /* Write the unnamed. */
18419 counts[MSC_pendings] = write_pendings (to, sccs, table, &crc);
18420
18421 /* Write the import table. */
18422 if (config.num_imports > 1)
18423 write_imports (to, &crc);
18424
18425 /* Write elided partition table. */
18426 if (config.num_partitions)
18427 write_partitions (to, config.num_partitions, &crc);
18428
18429 /* Write the line maps. */
18430 if (config.ordinary_locs)
18431 write_ordinary_maps (to, map_info, bool (config.num_partitions), &crc);
18432 if (config.macro_locs)
18433 write_macro_maps (to, map_info, &crc);
18434
18435 if (is_header ())
18436 {
18437 counts[MSC_macros] = write_macros (to, macros, &crc);
18438 counts[MSC_inits] = write_inits (to, table, &crc);
18439 vec_free (macros);
18440 }
18441
18442 unsigned clusters = counts[MSC_sec_hwm] - counts[MSC_sec_lwm];
18443 dump () && dump ("Wrote %u clusters, average %u bytes/cluster",
18444 clusters, (bytes + clusters / 2) / (clusters + !clusters));
18445 trees_out::instrument ();
18446
18447 write_counts (to, counts, &crc);
18448
18449 spaces.release ();
18450 sccs.release ();
18451
18452 vec_free (macro_loc_remap);
18453 vec_free (ord_loc_remap);
18454 vec_free (ool);
18455
18456 // FIXME:QOI: Have a command line switch to control more detailed
18457 // information (which might leak data you do not want to leak).
18458 // Perhaps (some of) the write_readme contents should also be
18459 // so-controlled.
18460 if (false)
18461 write_env (to);
18462 }
18463
18464 // Finish module writing after we've emitted all dynamic initializers.
18465
18466 void
18467 module_state::write_end (elf_out *to, cpp_reader *reader,
18468 module_state_config &config, unsigned &crc)
18469 {
18470 /* And finish up. */
18471 write_config (to, config, crc);
18472
18473 /* Human-readable info. */
18474 write_readme (to, reader, config.dialect_str);
18475
18476 dump () && dump ("Wrote %u sections", to->get_section_limit ());
18477 }
18478
18479 /* Initial read of a CMI. Checks config, loads up imports and line
18480 maps. */
18481
18482 bool
18483 module_state::read_initial (cpp_reader *reader)
18484 {
18485 module_state_config config;
18486 bool ok = true;
18487
18488 if (ok && !from ()->begin (loc))
18489 ok = false;
18490
18491 if (ok && !read_config (config))
18492 ok = false;
18493
18494 bool have_locs = ok && read_prepare_maps (&config);
18495
18496 /* Ordinary maps before the imports. */
18497 if (!(have_locs && config.ordinary_locs))
18498 ordinary_locs.first = line_table->highest_location + 1;
18499 else if (!read_ordinary_maps (config.ordinary_locs, config.loc_range_bits))
18500 ok = false;
18501
18502 /* Allocate the REMAP vector. */
18503 slurp->alloc_remap (config.num_imports);
18504
18505 if (ok)
18506 {
18507 /* Read the import table. Decrement current to stop this CMI
18508 from being evicted during the import. */
18509 slurp->current--;
18510 if (config.num_imports > 1 && !read_imports (reader, line_table))
18511 ok = false;
18512 slurp->current++;
18513 }
18514
18515 /* Read the elided partition table, if we're the primary partition. */
18516 if (ok && config.num_partitions && is_module ()
18517 && !read_partitions (config.num_partitions))
18518 ok = false;
18519
18520 /* Determine the module's number. */
18521 gcc_checking_assert (mod == MODULE_UNKNOWN);
18522 gcc_checking_assert (this != (*modules)[0]);
18523
18524 {
18525 /* Allocate space in the entities array now -- that array must be
18526 monotonically in step with the modules array. */
18527 entity_lwm = vec_safe_length (entity_ary);
18528 entity_num = config.num_entities;
18529 gcc_checking_assert (modules->length () == 1
18530 || modules->last ()->entity_lwm <= entity_lwm);
18531 vec_safe_reserve (entity_ary, config.num_entities);
18532
18533 binding_slot slot;
18534 slot.u.binding = NULL_TREE;
18535 for (unsigned count = config.num_entities; count--;)
18536 entity_ary->quick_push (slot);
18537 }
18538
18539 /* We'll run out of other resources before we run out of module
18540 indices. */
18541 mod = modules->length ();
18542 vec_safe_push (modules, this);
18543
18544 /* We always import and export ourselves. */
18545 bitmap_set_bit (imports, mod);
18546 bitmap_set_bit (exports, mod);
18547
18548 if (ok)
18549 (*slurp->remap)[0] = mod << 1;
18550 dump () && dump ("Assigning %M module number %u", this, mod);
18551
18552 /* We should not have been frozen during the importing done by
18553 read_config. */
18554 gcc_assert (!from ()->is_frozen ());
18555
18556 /* Macro maps after the imports. */
18557 if (!(ok && have_locs && config.macro_locs))
18558 macro_locs.first = LINEMAPS_MACRO_LOWEST_LOCATION (line_table);
18559 else if (!read_macro_maps (config.macro_locs))
18560 ok = false;
18561
18562 /* Note whether there's an active initializer. */
18563 active_init_p = !is_header () && bool (config.active_init);
18564
18565 gcc_assert (slurp->current == ~0u);
18566 return ok;
18567 }
18568
18569 /* Read a preprocessor state. */
18570
18571 bool
18572 module_state::read_preprocessor (bool outermost)
18573 {
18574 gcc_checking_assert (is_header () && slurp
18575 && slurp->remap_module (0) == mod);
18576
18577 if (loadedness == ML_PREPROCESSOR)
18578 return !(from () && from ()->get_error ());
18579
18580 bool ok = true;
18581
18582 /* Read direct header imports. */
18583 unsigned len = slurp->remap->length ();
18584 for (unsigned ix = 1; ok && ix != len; ix++)
18585 {
18586 unsigned map = (*slurp->remap)[ix];
18587 if (map & 1)
18588 {
18589 module_state *import = (*modules)[map >> 1];
18590 if (import->is_header ())
18591 {
18592 ok = import->read_preprocessor (false);
18593 bitmap_ior_into (slurp->headers, import->slurp->headers);
18594 }
18595 }
18596 }
18597
18598 /* Record as a direct header. */
18599 if (ok)
18600 bitmap_set_bit (slurp->headers, mod);
18601
18602 if (ok && !read_macros ())
18603 ok = false;
18604
18605 loadedness = ML_PREPROCESSOR;
18606 announce ("macros");
18607
18608 if (flag_preprocess_only)
18609 /* We're done with the string table. */
18610 from ()->release ();
18611
18612 return check_read (outermost, ok);
18613 }
18614
18615 /* Read language state. */
18616
18617 bool
18618 module_state::read_language (bool outermost)
18619 {
18620 gcc_checking_assert (!lazy_snum);
18621
18622 if (loadedness == ML_LANGUAGE)
18623 return !(slurp && from () && from ()->get_error ());
18624
18625 gcc_checking_assert (slurp && slurp->current == ~0u
18626 && slurp->remap_module (0) == mod);
18627
18628 bool ok = true;
18629
18630 /* Read direct imports. */
18631 unsigned len = slurp->remap->length ();
18632 for (unsigned ix = 1; ok && ix != len; ix++)
18633 {
18634 unsigned map = (*slurp->remap)[ix];
18635 if (map & 1)
18636 {
18637 module_state *import = (*modules)[map >> 1];
18638 if (!import->read_language (false))
18639 ok = false;
18640 }
18641 }
18642
18643 unsigned counts[MSC_HWM];
18644
18645 if (ok && !read_counts (counts))
18646 ok = false;
18647
18648 function_depth++; /* Prevent unexpected GCs. */
18649
18650 if (ok && counts[MSC_entities] != entity_num)
18651 ok = false;
18652 if (ok && counts[MSC_entities]
18653 && !read_entities (counts[MSC_entities],
18654 counts[MSC_sec_lwm], counts[MSC_sec_hwm]))
18655 ok = false;
18656
18657 /* Read the namespace hierarchy. */
18658 if (ok && counts[MSC_namespaces]
18659 && !read_namespaces (counts[MSC_namespaces]))
18660 ok = false;
18661
18662 if (ok && !read_bindings (counts[MSC_bindings],
18663 counts[MSC_sec_lwm], counts[MSC_sec_hwm]))
18664 ok = false;
18665
18666 /* And unnamed. */
18667 if (ok && counts[MSC_pendings] && !read_pendings (counts[MSC_pendings]))
18668 ok = false;
18669
18670 if (ok)
18671 {
18672 slurp->remaining = counts[MSC_sec_hwm] - counts[MSC_sec_lwm];
18673 available_clusters += counts[MSC_sec_hwm] - counts[MSC_sec_lwm];
18674 }
18675
18676 if (!flag_module_lazy
18677 || (is_partition ()
18678 && module_interface_p ()
18679 && !module_partition_p ()))
18680 {
18681 /* Read the sections in forward order, so that dependencies are read
18682 first. See note about tarjan_connect. */
18683 ggc_collect ();
18684
18685 lazy_snum = ~0u;
18686
18687 unsigned hwm = counts[MSC_sec_hwm];
18688 for (unsigned ix = counts[MSC_sec_lwm]; ok && ix != hwm; ix++)
18689 if (!load_section (ix, NULL))
18690 {
18691 ok = false;
18692 break;
18693 }
18694 lazy_snum = 0;
18695 post_load_processing ();
18696
18697 ggc_collect ();
18698
18699 if (ok && CHECKING_P)
18700 for (unsigned ix = 0; ix != entity_num; ix++)
18701 gcc_assert (!(*entity_ary)[ix + entity_lwm].is_lazy ());
18702 }
18703
18704 // If the import is a header-unit, we need to register initializers
18705 // of any static objects it contains (looking at you _Ioinit).
18706 // Notice, the ordering of these initializers will be that of a
18707 // dynamic initializer at this point in the current TU. (Other
18708 // instances of these objects in other TUs will be initialized as
18709 // part of that TU's global initializers.)
18710 if (ok && counts[MSC_inits] && !read_inits (counts[MSC_inits]))
18711 ok = false;
18712
18713 function_depth--;
18714
18715 announce (flag_module_lazy ? "lazy" : "imported");
18716 loadedness = ML_LANGUAGE;
18717
18718 gcc_assert (slurp->current == ~0u);
18719
18720 /* We're done with the string table. */
18721 from ()->release ();
18722
18723 return check_read (outermost, ok);
18724 }
18725
18726 bool
18727 module_state::maybe_defrost ()
18728 {
18729 bool ok = true;
18730 if (from ()->is_frozen ())
18731 {
18732 if (lazy_open >= lazy_limit)
18733 freeze_an_elf ();
18734 dump () && dump ("Defrosting '%s'", filename);
18735 ok = from ()->defrost (maybe_add_cmi_prefix (filename));
18736 lazy_open++;
18737 }
18738
18739 return ok;
18740 }
18741
18742 /* Load section SNUM, dealing with laziness. It doesn't matter if we
18743 have multiple concurrent loads, because we do not use TREE_VISITED
18744 when reading back in. */
18745
18746 bool
18747 module_state::load_section (unsigned snum, binding_slot *mslot)
18748 {
18749 if (from ()->get_error ())
18750 return false;
18751
18752 if (snum >= slurp->current)
18753 from ()->set_error (elf::E_BAD_LAZY);
18754 else if (maybe_defrost ())
18755 {
18756 unsigned old_current = slurp->current;
18757 slurp->current = snum;
18758 slurp->lru = 0; /* Do not swap out. */
18759 slurp->remaining--;
18760 read_cluster (snum);
18761 slurp->lru = ++lazy_lru;
18762 slurp->current = old_current;
18763 }
18764
18765 if (mslot && mslot->is_lazy ())
18766 {
18767 /* Oops, the section didn't set this slot. */
18768 from ()->set_error (elf::E_BAD_DATA);
18769 *mslot = NULL_TREE;
18770 }
18771
18772 bool ok = !from ()->get_error ();
18773 if (!ok)
18774 {
18775 error_at (loc, "failed to read compiled module cluster %u: %s",
18776 snum, from ()->get_error (filename));
18777 note_cmi_name ();
18778 }
18779
18780 maybe_completed_reading ();
18781
18782 return ok;
18783 }
18784
18785 void
18786 module_state::maybe_completed_reading ()
18787 {
18788 if (loadedness == ML_LANGUAGE && slurp->current == ~0u && !slurp->remaining)
18789 {
18790 lazy_open--;
18791 /* We no longer need the macros, all tokenizing has been done. */
18792 slurp->release_macros ();
18793
18794 from ()->end ();
18795 slurp->close ();
18796 slurped ();
18797 }
18798 }
18799
18800 /* After a reading operation, make sure things are still ok. If not,
18801 emit an error and clean up. */
18802
18803 bool
18804 module_state::check_read (bool outermost, bool ok)
18805 {
18806 gcc_checking_assert (!outermost || slurp->current == ~0u);
18807
18808 if (!ok)
18809 from ()->set_error ();
18810
18811 if (int e = from ()->get_error ())
18812 {
18813 error_at (loc, "failed to read compiled module: %s",
18814 from ()->get_error (filename));
18815 note_cmi_name ();
18816
18817 if (e == EMFILE
18818 || e == ENFILE
18819 #if MAPPED_READING
18820 || e == ENOMEM
18821 #endif
18822 || false)
18823 inform (loc, "consider using %<-fno-module-lazy%>,"
18824 " increasing %<-param-lazy-modules=%u%> value,"
18825 " or increasing the per-process file descriptor limit",
18826 param_lazy_modules);
18827 else if (e == ENOENT)
18828 inform (loc, "imports must be built before being imported");
18829
18830 if (outermost)
18831 fatal_error (loc, "returning to the gate for a mechanical issue");
18832
18833 ok = false;
18834 }
18835
18836 maybe_completed_reading ();
18837
18838 return ok;
18839 }
18840
18841 /* Return the IDENTIFIER_NODE naming module IX. This is the name
18842 including dots. */
18843
18844 char const *
18845 module_name (unsigned ix, bool header_ok)
18846 {
18847 if (modules)
18848 {
18849 module_state *imp = (*modules)[ix];
18850
18851 if (ix && !imp->name)
18852 imp = imp->parent;
18853
18854 if (header_ok || !imp->is_header ())
18855 return imp->get_flatname ();
18856 }
18857
18858 return NULL;
18859 }
18860
18861 /* Return the bitmap describing what modules are imported. Remember,
18862 we always import ourselves. */
18863
18864 bitmap
18865 get_import_bitmap ()
18866 {
18867 return (*modules)[0]->imports;
18868 }
18869
18870 /* Return the visible imports and path of instantiation for an
18871 instantiation at TINST. If TINST is nullptr, we're not in an
18872 instantiation, and thus will return the visible imports of the
18873 current TU (and NULL *PATH_MAP_P). We cache the information on
18874 the tinst level itself. */
18875
18876 static bitmap
18877 path_of_instantiation (tinst_level *tinst, bitmap *path_map_p)
18878 {
18879 gcc_checking_assert (modules_p ());
18880
18881 if (!tinst)
18882 {
18883 /* Not inside an instantiation, just the regular case. */
18884 *path_map_p = nullptr;
18885 return get_import_bitmap ();
18886 }
18887
18888 if (!tinst->path)
18889 {
18890 /* Calculate. */
18891 bitmap visible = path_of_instantiation (tinst->next, path_map_p);
18892 bitmap path_map = *path_map_p;
18893
18894 if (!path_map)
18895 {
18896 path_map = BITMAP_GGC_ALLOC ();
18897 bitmap_set_bit (path_map, 0);
18898 }
18899
18900 tree decl = tinst->tldcl;
18901 if (TREE_CODE (decl) == TREE_LIST)
18902 decl = TREE_PURPOSE (decl);
18903 if (TYPE_P (decl))
18904 decl = TYPE_NAME (decl);
18905
18906 if (unsigned mod = get_originating_module (decl))
18907 if (!bitmap_bit_p (path_map, mod))
18908 {
18909 /* This is brand new information! */
18910 bitmap new_path = BITMAP_GGC_ALLOC ();
18911 bitmap_copy (new_path, path_map);
18912 bitmap_set_bit (new_path, mod);
18913 path_map = new_path;
18914
18915 bitmap imports = (*modules)[mod]->imports;
18916 if (bitmap_intersect_compl_p (imports, visible))
18917 {
18918 /* IMPORTS contains additional modules to VISIBLE. */
18919 bitmap new_visible = BITMAP_GGC_ALLOC ();
18920
18921 bitmap_ior (new_visible, visible, imports);
18922 visible = new_visible;
18923 }
18924 }
18925
18926 tinst->path = path_map;
18927 tinst->visible = visible;
18928 }
18929
18930 *path_map_p = tinst->path;
18931 return tinst->visible;
18932 }
18933
18934 /* Return the bitmap describing what modules are visible along the
18935 path of instantiation. If we're not an instantiation, this will be
18936 the visible imports of the TU. *PATH_MAP_P is filled in with the
18937 modules owning the instantiation path -- we see the module-linkage
18938 entities of those modules. */
18939
18940 bitmap
18941 visible_instantiation_path (bitmap *path_map_p)
18942 {
18943 if (!modules_p ())
18944 return NULL;
18945
18946 return path_of_instantiation (current_instantiation (), path_map_p);
18947 }
18948
18949 /* We've just directly imported IMPORT. Update our import/export
18950 bitmaps. IS_EXPORT is true if we're reexporting the OTHER. */
18951
18952 void
18953 module_state::set_import (module_state const *import, bool is_export)
18954 {
18955 gcc_checking_assert (this != import);
18956
18957 /* We see IMPORT's exports (which includes IMPORT). If IMPORT is
18958 the primary interface or a partition we'll see its imports. */
18959 bitmap_ior_into (imports, import->is_module () || import->is_partition ()
18960 ? import->imports : import->exports);
18961
18962 if (is_export)
18963 /* We'll export OTHER's exports. */
18964 bitmap_ior_into (exports, import->exports);
18965 }
18966
18967 /* Return the declaring entity of DECL. That is the decl determining
18968 how to decorate DECL with module information. Returns NULL_TREE if
18969 it's the global module. */
18970
18971 tree
18972 get_originating_module_decl (tree decl)
18973 {
18974 /* An enumeration constant. */
18975 if (TREE_CODE (decl) == CONST_DECL
18976 && DECL_CONTEXT (decl)
18977 && (TREE_CODE (DECL_CONTEXT (decl)) == ENUMERAL_TYPE))
18978 decl = TYPE_NAME (DECL_CONTEXT (decl));
18979 else if (TREE_CODE (decl) == FIELD_DECL
18980 || TREE_CODE (decl) == USING_DECL
18981 || CONST_DECL_USING_P (decl))
18982 {
18983 decl = DECL_CONTEXT (decl);
18984 if (TREE_CODE (decl) != FUNCTION_DECL)
18985 decl = TYPE_NAME (decl);
18986 }
18987
18988 gcc_checking_assert (TREE_CODE (decl) == TEMPLATE_DECL
18989 || TREE_CODE (decl) == FUNCTION_DECL
18990 || TREE_CODE (decl) == TYPE_DECL
18991 || TREE_CODE (decl) == VAR_DECL
18992 || TREE_CODE (decl) == CONCEPT_DECL
18993 || TREE_CODE (decl) == NAMESPACE_DECL);
18994
18995 for (;;)
18996 {
18997 /* Uninstantiated template friends are owned by the befriending
18998 class -- not their context. */
18999 if (TREE_CODE (decl) == TEMPLATE_DECL
19000 && DECL_UNINSTANTIATED_TEMPLATE_FRIEND_P (decl))
19001 decl = TYPE_NAME (DECL_CHAIN (decl));
19002
19003 /* An imported temploid friend is attached to the same module the
19004 befriending class was. */
19005 if (imported_temploid_friends)
19006 if (tree *slot = imported_temploid_friends->get (decl))
19007 decl = *slot;
19008
19009 int use;
19010 if (tree ti = node_template_info (decl, use))
19011 {
19012 decl = TI_TEMPLATE (ti);
19013 if (TREE_CODE (decl) != TEMPLATE_DECL)
19014 {
19015 /* A friend template specialization. */
19016 gcc_checking_assert (OVL_P (decl));
19017 return global_namespace;
19018 }
19019 }
19020 else
19021 {
19022 tree ctx = CP_DECL_CONTEXT (decl);
19023 if (TREE_CODE (ctx) == NAMESPACE_DECL)
19024 break;
19025
19026 if (TYPE_P (ctx))
19027 {
19028 ctx = TYPE_NAME (ctx);
19029 if (!ctx)
19030 {
19031 /* Some kind of internal type. */
19032 gcc_checking_assert (DECL_ARTIFICIAL (decl));
19033 return global_namespace;
19034 }
19035 }
19036 decl = ctx;
19037 }
19038 }
19039
19040 return decl;
19041 }
19042
19043 int
19044 get_originating_module (tree decl, bool for_mangle)
19045 {
19046 tree owner = get_originating_module_decl (decl);
19047 tree not_tmpl = STRIP_TEMPLATE (owner);
19048
19049 if (!DECL_LANG_SPECIFIC (not_tmpl))
19050 return for_mangle ? -1 : 0;
19051
19052 if (for_mangle && !DECL_MODULE_ATTACH_P (not_tmpl))
19053 return -1;
19054
19055 int mod = !DECL_MODULE_IMPORT_P (not_tmpl) ? 0 : get_importing_module (owner);
19056 gcc_checking_assert (!for_mangle || !(*modules)[mod]->is_header ());
19057 return mod;
19058 }
19059
19060 unsigned
19061 get_importing_module (tree decl, bool flexible)
19062 {
19063 unsigned index = import_entity_index (decl, flexible);
19064 if (index == ~(~0u >> 1))
19065 return -1;
19066 module_state *module = import_entity_module (index);
19067
19068 return module->mod;
19069 }
19070
19071 /* Is it permissible to redeclare OLDDECL with NEWDECL.
19072
19073 If NEWDECL is NULL, assumes that OLDDECL will be redeclared using
19074 the current scope's module and attachment. */
19075
19076 bool
19077 module_may_redeclare (tree olddecl, tree newdecl)
19078 {
19079 tree decl = olddecl;
19080 for (;;)
19081 {
19082 tree ctx = CP_DECL_CONTEXT (decl);
19083 if (TREE_CODE (ctx) == NAMESPACE_DECL)
19084 // Found the namespace-scope decl.
19085 break;
19086 if (!CLASS_TYPE_P (ctx))
19087 // We've met a non-class scope. Such a thing is not
19088 // reopenable, so we must be ok.
19089 return true;
19090 decl = TYPE_NAME (ctx);
19091 }
19092
19093 int use_tpl = 0;
19094 if (node_template_info (STRIP_TEMPLATE (decl), use_tpl) && use_tpl)
19095 // Specializations of any kind can be redeclared anywhere.
19096 // FIXME: Should we be checking this in more places on the scope chain?
19097 return true;
19098
19099 module_state *old_mod = (*modules)[0];
19100 module_state *new_mod = old_mod;
19101
19102 tree old_origin = get_originating_module_decl (decl);
19103 tree old_inner = STRIP_TEMPLATE (old_origin);
19104 bool olddecl_attached_p = (DECL_LANG_SPECIFIC (old_inner)
19105 && DECL_MODULE_ATTACH_P (old_inner));
19106 if (DECL_LANG_SPECIFIC (old_inner) && DECL_MODULE_IMPORT_P (old_inner))
19107 {
19108 unsigned index = import_entity_index (old_origin);
19109 old_mod = import_entity_module (index);
19110 }
19111
19112 bool newdecl_attached_p = module_attach_p ();
19113 if (newdecl)
19114 {
19115 tree new_origin = get_originating_module_decl (newdecl);
19116 tree new_inner = STRIP_TEMPLATE (new_origin);
19117 newdecl_attached_p = (DECL_LANG_SPECIFIC (new_inner)
19118 && DECL_MODULE_ATTACH_P (new_inner));
19119 if (DECL_LANG_SPECIFIC (new_inner) && DECL_MODULE_IMPORT_P (new_inner))
19120 {
19121 unsigned index = import_entity_index (new_origin);
19122 new_mod = import_entity_module (index);
19123 }
19124 }
19125
19126 /* Module attachment needs to match. */
19127 if (olddecl_attached_p == newdecl_attached_p)
19128 {
19129 if (!olddecl_attached_p)
19130 /* Both are GM entities, OK. */
19131 return true;
19132
19133 if (new_mod == old_mod
19134 || (new_mod && get_primary (new_mod) == get_primary (old_mod)))
19135 /* Both attached to same named module, OK. */
19136 return true;
19137 }
19138
19139 /* Attached to different modules, error. */
19140 decl = newdecl ? newdecl : olddecl;
19141 location_t loc = newdecl ? DECL_SOURCE_LOCATION (newdecl) : input_location;
19142 if (DECL_IS_UNDECLARED_BUILTIN (olddecl))
19143 {
19144 if (newdecl_attached_p)
19145 error_at (loc, "declaring %qD in module %qs conflicts with builtin "
19146 "in global module", decl, new_mod->get_flatname ());
19147 else
19148 error_at (loc, "declaration %qD conflicts with builtin", decl);
19149 }
19150 else if (DECL_LANG_SPECIFIC (old_inner) && DECL_MODULE_IMPORT_P (old_inner))
19151 {
19152 auto_diagnostic_group d;
19153 if (newdecl_attached_p)
19154 error_at (loc, "redeclaring %qD in module %qs conflicts with import",
19155 decl, new_mod->get_flatname ());
19156 else
19157 error_at (loc, "redeclaring %qD in global module conflicts with import",
19158 decl);
19159
19160 if (olddecl_attached_p)
19161 inform (DECL_SOURCE_LOCATION (olddecl),
19162 "import declared attached to module %qs",
19163 old_mod->get_flatname ());
19164 else
19165 inform (DECL_SOURCE_LOCATION (olddecl),
19166 "import declared in global module");
19167 }
19168 else
19169 {
19170 auto_diagnostic_group d;
19171 if (newdecl_attached_p)
19172 error_at (loc, "conflicting declaration of %qD in module %qs",
19173 decl, new_mod->get_flatname ());
19174 else
19175 error_at (loc, "conflicting declaration of %qD in global module",
19176 decl);
19177
19178 if (olddecl_attached_p)
19179 inform (DECL_SOURCE_LOCATION (olddecl),
19180 "previously declared in module %qs",
19181 old_mod->get_flatname ());
19182 else
19183 inform (DECL_SOURCE_LOCATION (olddecl),
19184 "previously declared in global module");
19185 }
19186 return false;
19187 }
19188
19189 /* DECL is being created by this TU. Record it came from here. We
19190 record module purview, so we can see if partial or explicit
19191 specialization needs to be written out, even though its purviewness
19192 comes from the most general template. */
19193
19194 void
19195 set_instantiating_module (tree decl)
19196 {
19197 gcc_assert (TREE_CODE (decl) == FUNCTION_DECL
19198 || VAR_P (decl)
19199 || TREE_CODE (decl) == TYPE_DECL
19200 || TREE_CODE (decl) == CONCEPT_DECL
19201 || TREE_CODE (decl) == TEMPLATE_DECL
19202 || (TREE_CODE (decl) == NAMESPACE_DECL
19203 && DECL_NAMESPACE_ALIAS (decl)));
19204
19205 if (!modules_p ())
19206 return;
19207
19208 decl = STRIP_TEMPLATE (decl);
19209
19210 if (!DECL_LANG_SPECIFIC (decl) && module_purview_p ())
19211 retrofit_lang_decl (decl);
19212
19213 if (DECL_LANG_SPECIFIC (decl))
19214 {
19215 DECL_MODULE_PURVIEW_P (decl) = module_purview_p ();
19216 /* If this was imported, we'll still be in the entity_hash. */
19217 DECL_MODULE_IMPORT_P (decl) = false;
19218 }
19219 }
19220
19221 /* If DECL is a class member, whose class is not defined in this TU
19222 (it was imported), remember this decl. */
19223
19224 void
19225 set_defining_module (tree decl)
19226 {
19227 gcc_checking_assert (!DECL_LANG_SPECIFIC (decl)
19228 || !DECL_MODULE_IMPORT_P (decl));
19229
19230 if (module_maybe_has_cmi_p ())
19231 {
19232 /* We need to track all declarations within a module, not just those
19233 in the module purview, because we don't necessarily know yet if
19234 this module will require a CMI while in the global fragment. */
19235 tree ctx = DECL_CONTEXT (decl);
19236 if (ctx
19237 && (TREE_CODE (ctx) == RECORD_TYPE || TREE_CODE (ctx) == UNION_TYPE)
19238 && DECL_LANG_SPECIFIC (TYPE_NAME (ctx))
19239 && DECL_MODULE_IMPORT_P (TYPE_NAME (ctx)))
19240 {
19241 /* This entity's context is from an import. We may need to
19242 record this entity to make sure we emit it in the CMI.
19243 Template specializations are in the template hash tables,
19244 so we don't need to record them here as well. */
19245 int use_tpl = -1;
19246 tree ti = node_template_info (decl, use_tpl);
19247 if (use_tpl <= 0)
19248 {
19249 if (ti)
19250 {
19251 gcc_checking_assert (!use_tpl);
19252 /* Get to the TEMPLATE_DECL. */
19253 decl = TI_TEMPLATE (ti);
19254 }
19255
19256 /* Record it on the class_members list. */
19257 vec_safe_push (class_members, decl);
19258 }
19259 }
19260 }
19261 }
19262
19263 /* Also remember DECL if it's a newly declared class template partial
19264 specialization, because these are not necessarily added to the
19265 instantiation tables. */
19266
19267 void
19268 set_defining_module_for_partial_spec (tree decl)
19269 {
19270 if (module_maybe_has_cmi_p ()
19271 && DECL_IMPLICIT_TYPEDEF_P (decl)
19272 && CLASSTYPE_TEMPLATE_SPECIALIZATION (TREE_TYPE (decl)))
19273 vec_safe_push (partial_specializations, decl);
19274 }
19275
19276 void
19277 set_originating_module (tree decl, bool friend_p ATTRIBUTE_UNUSED)
19278 {
19279 set_instantiating_module (decl);
19280
19281 if (!DECL_NAMESPACE_SCOPE_P (decl))
19282 return;
19283
19284 gcc_checking_assert (friend_p || decl == get_originating_module_decl (decl));
19285
19286 if (module_attach_p ())
19287 {
19288 retrofit_lang_decl (decl);
19289 DECL_MODULE_ATTACH_P (decl) = true;
19290 }
19291
19292 if (!module_exporting_p ())
19293 return;
19294
19295 // FIXME: Check ill-formed linkage
19296 DECL_MODULE_EXPORT_P (decl) = true;
19297 }
19298
19299 /* DECL is keyed to CTX for odr purposes. */
19300
19301 void
19302 maybe_key_decl (tree ctx, tree decl)
19303 {
19304 if (!modules_p ())
19305 return;
19306
19307 /* We only need to deal with lambdas attached to var, field,
19308 parm, or type decls. */
19309 if (TREE_CODE (ctx) != VAR_DECL
19310 && TREE_CODE (ctx) != FIELD_DECL
19311 && TREE_CODE (ctx) != PARM_DECL
19312 && TREE_CODE (ctx) != TYPE_DECL)
19313 return;
19314
19315 /* For fields, key it to the containing type to handle deduplication
19316 correctly. */
19317 if (TREE_CODE (ctx) == FIELD_DECL)
19318 ctx = TYPE_NAME (DECL_CONTEXT (ctx));
19319
19320 if (!keyed_table)
19321 keyed_table = new keyed_map_t (EXPERIMENT (1, 400));
19322
19323 auto &vec = keyed_table->get_or_insert (ctx);
19324 if (!vec.length ())
19325 {
19326 retrofit_lang_decl (ctx);
19327 DECL_MODULE_KEYED_DECLS_P (ctx) = true;
19328 }
19329 vec.safe_push (decl);
19330 }
19331
19332 /* DECL is an instantiated friend that should be attached to the same
19333 module that ORIG is. */
19334
19335 void
19336 propagate_defining_module (tree decl, tree orig)
19337 {
19338 if (!modules_p ())
19339 return;
19340
19341 tree not_tmpl = STRIP_TEMPLATE (orig);
19342 if (DECL_LANG_SPECIFIC (not_tmpl) && DECL_MODULE_ATTACH_P (not_tmpl))
19343 {
19344 tree inner = STRIP_TEMPLATE (decl);
19345 retrofit_lang_decl (inner);
19346 DECL_MODULE_ATTACH_P (inner) = true;
19347 }
19348
19349 if (DECL_LANG_SPECIFIC (not_tmpl) && DECL_MODULE_IMPORT_P (not_tmpl))
19350 {
19351 bool exists = imported_temploid_friends->put (decl, orig);
19352
19353 /* We should only be called if lookup for an existing decl
19354 failed, in which case there shouldn't already be an entry
19355 in the map. */
19356 gcc_assert (!exists);
19357 }
19358 }
19359
19360 /* DECL is being freed, clear data we don't need anymore. */
19361
19362 void
19363 remove_defining_module (tree decl)
19364 {
19365 if (!modules_p ())
19366 return;
19367
19368 if (imported_temploid_friends)
19369 imported_temploid_friends->remove (decl);
19370 }
19371
19372 /* Create the flat name string. It is simplest to have it handy. */
19373
19374 void
19375 module_state::set_flatname ()
19376 {
19377 gcc_checking_assert (!flatname);
19378 if (parent)
19379 {
19380 auto_vec<tree,5> ids;
19381 size_t len = 0;
19382 char const *primary = NULL;
19383 size_t pfx_len = 0;
19384
19385 for (module_state *probe = this;
19386 probe;
19387 probe = probe->parent)
19388 if (is_partition () && !probe->is_partition ())
19389 {
19390 primary = probe->get_flatname ();
19391 pfx_len = strlen (primary);
19392 break;
19393 }
19394 else
19395 {
19396 ids.safe_push (probe->name);
19397 len += IDENTIFIER_LENGTH (probe->name) + 1;
19398 }
19399
19400 char *flat = XNEWVEC (char, pfx_len + len + is_partition ());
19401 flatname = flat;
19402
19403 if (primary)
19404 {
19405 memcpy (flat, primary, pfx_len);
19406 flat += pfx_len;
19407 *flat++ = ':';
19408 }
19409
19410 for (unsigned len = 0; ids.length ();)
19411 {
19412 if (len)
19413 flat[len++] = '.';
19414 tree elt = ids.pop ();
19415 unsigned l = IDENTIFIER_LENGTH (elt);
19416 memcpy (flat + len, IDENTIFIER_POINTER (elt), l + 1);
19417 len += l;
19418 }
19419 }
19420 else if (is_header ())
19421 flatname = TREE_STRING_POINTER (name);
19422 else
19423 flatname = IDENTIFIER_POINTER (name);
19424 }
19425
19426 /* Read the CMI file for a module. */
19427
19428 bool
19429 module_state::do_import (cpp_reader *reader, bool outermost)
19430 {
19431 gcc_assert (global_namespace == current_scope () && loadedness == ML_NONE);
19432
19433 loc = linemap_module_loc (line_table, loc, get_flatname ());
19434
19435 if (lazy_open >= lazy_limit)
19436 freeze_an_elf ();
19437
19438 int fd = -1;
19439 int e = ENOENT;
19440 if (filename)
19441 {
19442 const char *file = maybe_add_cmi_prefix (filename);
19443 dump () && dump ("CMI is %s", file);
19444 if (note_module_cmi_yes || inform_cmi_p)
19445 inform (loc, "reading CMI %qs", file);
19446 /* Add the CMI file to the dependency tracking. */
19447 if (cpp_get_deps (reader))
19448 deps_add_dep (cpp_get_deps (reader), file);
19449 fd = open (file, O_RDONLY | O_CLOEXEC | O_BINARY);
19450 e = errno;
19451 }
19452
19453 gcc_checking_assert (!slurp);
19454 slurp = new slurping (new elf_in (fd, e));
19455
19456 bool ok = true;
19457 if (!from ()->get_error ())
19458 {
19459 announce ("importing");
19460 loadedness = ML_CONFIG;
19461 lazy_open++;
19462 ok = read_initial (reader);
19463 slurp->lru = ++lazy_lru;
19464 }
19465
19466 gcc_assert (slurp->current == ~0u);
19467
19468 return check_read (outermost, ok);
19469 }
19470
19471 /* Attempt to increase the file descriptor limit. */
19472
19473 static bool
19474 try_increase_lazy (unsigned want)
19475 {
19476 gcc_checking_assert (lazy_open >= lazy_limit);
19477
19478 /* If we're increasing, saturate at hard limit. */
19479 if (want > lazy_hard_limit && lazy_limit < lazy_hard_limit)
19480 want = lazy_hard_limit;
19481
19482 #if HAVE_SETRLIMIT
19483 if ((!lazy_limit || !param_lazy_modules)
19484 && lazy_hard_limit
19485 && want <= lazy_hard_limit)
19486 {
19487 struct rlimit rlimit;
19488 rlimit.rlim_cur = want + LAZY_HEADROOM;
19489 rlimit.rlim_max = lazy_hard_limit + LAZY_HEADROOM;
19490 if (!setrlimit (RLIMIT_NOFILE, &rlimit))
19491 lazy_limit = want;
19492 }
19493 #endif
19494
19495 return lazy_open < lazy_limit;
19496 }
19497
19498 /* Pick a victim module to freeze its reader. */
19499
19500 void
19501 module_state::freeze_an_elf ()
19502 {
19503 if (try_increase_lazy (lazy_open * 2))
19504 return;
19505
19506 module_state *victim = NULL;
19507 for (unsigned ix = modules->length (); ix--;)
19508 {
19509 module_state *candidate = (*modules)[ix];
19510 if (candidate && candidate->slurp && candidate->slurp->lru
19511 && candidate->from ()->is_freezable ()
19512 && (!victim || victim->slurp->lru > candidate->slurp->lru))
19513 victim = candidate;
19514 }
19515
19516 if (victim)
19517 {
19518 dump () && dump ("Freezing '%s'", victim->filename);
19519 if (victim->slurp->macro_defs.size)
19520 /* Save the macro definitions to a buffer. */
19521 victim->from ()->preserve (victim->slurp->macro_defs);
19522 if (victim->slurp->macro_tbl.size)
19523 /* Save the macro definitions to a buffer. */
19524 victim->from ()->preserve (victim->slurp->macro_tbl);
19525 victim->from ()->freeze ();
19526 lazy_open--;
19527 }
19528 else
19529 dump () && dump ("No module available for freezing");
19530 }
19531
19532 /* Load the lazy slot *MSLOT, INDEX'th slot of the module. */
19533
19534 bool
19535 module_state::lazy_load (unsigned index, binding_slot *mslot)
19536 {
19537 unsigned n = dump.push (this);
19538
19539 gcc_checking_assert (function_depth);
19540
19541 unsigned cookie = mslot->get_lazy ();
19542 unsigned snum = cookie >> 2;
19543 dump () && dump ("Loading entity %M[%u] section:%u", this, index, snum);
19544
19545 bool ok = load_section (snum, mslot);
19546
19547 dump.pop (n);
19548
19549 return ok;
19550 }
19551
19552 /* Load MOD's binding for NS::ID into *MSLOT. *MSLOT contains the
19553 lazy cookie. OUTER is true if this is the outermost lazy, (used
19554 for diagnostics). */
19555
19556 void
19557 lazy_load_binding (unsigned mod, tree ns, tree id, binding_slot *mslot)
19558 {
19559 int count = errorcount + warningcount;
19560
19561 timevar_start (TV_MODULE_IMPORT);
19562
19563 /* Make sure lazy loading from a template context behaves as if
19564 from a non-template context. */
19565 processing_template_decl_sentinel ptds;
19566
19567 /* Stop GC happening, even in outermost loads (because our caller
19568 could well be building up a lookup set). */
19569 function_depth++;
19570
19571 gcc_checking_assert (mod);
19572 module_state *module = (*modules)[mod];
19573 unsigned n = dump.push (module);
19574
19575 unsigned snum = mslot->get_lazy ();
19576 dump () && dump ("Lazily binding %P@%N section:%u", ns, id,
19577 module->name, snum);
19578
19579 bool ok = !recursive_lazy (snum);
19580 if (ok)
19581 {
19582 ok = module->load_section (snum, mslot);
19583 lazy_snum = 0;
19584 post_load_processing ();
19585 }
19586
19587 dump.pop (n);
19588
19589 function_depth--;
19590
19591 timevar_stop (TV_MODULE_IMPORT);
19592
19593 if (!ok)
19594 fatal_error (input_location,
19595 module->is_header ()
19596 ? G_("failed to load binding %<%E%s%E%>")
19597 : G_("failed to load binding %<%E%s%E@%s%>"),
19598 ns, &"::"[ns == global_namespace ? 2 : 0], id,
19599 module->get_flatname ());
19600
19601 if (count != errorcount + warningcount)
19602 inform (input_location,
19603 module->is_header ()
19604 ? G_("during load of binding %<%E%s%E%>")
19605 : G_("during load of binding %<%E%s%E@%s%>"),
19606 ns, &"::"[ns == global_namespace ? 2 : 0], id,
19607 module->get_flatname ());
19608 }
19609
19610 /* Load any pending entities keyed to the top-key of DECL. */
19611
19612 void
19613 lazy_load_pendings (tree decl)
19614 {
19615 /* Make sure lazy loading from a template context behaves as if
19616 from a non-template context. */
19617 processing_template_decl_sentinel ptds;
19618
19619 tree key_decl;
19620 pending_key key;
19621 key.ns = find_pending_key (decl, &key_decl);
19622 key.id = DECL_NAME (key_decl);
19623
19624 auto *pending_vec = pending_table ? pending_table->get (key) : nullptr;
19625 if (!pending_vec)
19626 return;
19627
19628 int count = errorcount + warningcount;
19629
19630 timevar_start (TV_MODULE_IMPORT);
19631 bool ok = !recursive_lazy ();
19632 if (ok)
19633 {
19634 function_depth++; /* Prevent GC */
19635 unsigned n = dump.push (NULL);
19636 dump () && dump ("Reading %u pending entities keyed to %P",
19637 pending_vec->length (), key.ns, key.id);
19638 for (unsigned ix = pending_vec->length (); ix--;)
19639 {
19640 unsigned index = (*pending_vec)[ix];
19641 binding_slot *slot = &(*entity_ary)[index];
19642
19643 if (slot->is_lazy ())
19644 {
19645 module_state *import = import_entity_module (index);
19646 if (!import->lazy_load (index - import->entity_lwm, slot))
19647 ok = false;
19648 }
19649 else if (dump ())
19650 {
19651 module_state *import = import_entity_module (index);
19652 dump () && dump ("Entity %M[%u] already loaded",
19653 import, index - import->entity_lwm);
19654 }
19655 }
19656
19657 pending_table->remove (key);
19658 dump.pop (n);
19659 lazy_snum = 0;
19660 post_load_processing ();
19661 function_depth--;
19662 }
19663
19664 timevar_stop (TV_MODULE_IMPORT);
19665
19666 if (!ok)
19667 fatal_error (input_location, "failed to load pendings for %<%E%s%E%>",
19668 key.ns, &"::"[key.ns == global_namespace ? 2 : 0], key.id);
19669
19670 if (count != errorcount + warningcount)
19671 inform (input_location, "during load of pendings for %<%E%s%E%>",
19672 key.ns, &"::"[key.ns == global_namespace ? 2 : 0], key.id);
19673 }
19674
19675 static void
19676 direct_import (module_state *import, cpp_reader *reader)
19677 {
19678 timevar_start (TV_MODULE_IMPORT);
19679 unsigned n = dump.push (import);
19680
19681 gcc_checking_assert (import->is_direct () && import->has_location ());
19682 if (import->loadedness == ML_NONE)
19683 if (!import->do_import (reader, true))
19684 gcc_unreachable ();
19685
19686 if (import->loadedness < ML_LANGUAGE)
19687 {
19688 if (!keyed_table)
19689 keyed_table = new keyed_map_t (EXPERIMENT (1, 400));
19690 import->read_language (true);
19691 }
19692
19693 (*modules)[0]->set_import (import, import->exported_p);
19694
19695 dump.pop (n);
19696 timevar_stop (TV_MODULE_IMPORT);
19697 }
19698
19699 /* Import module IMPORT. */
19700
19701 void
19702 import_module (module_state *import, location_t from_loc, bool exporting_p,
19703 tree, cpp_reader *reader)
19704 {
19705 if (!import->check_not_purview (from_loc))
19706 return;
19707
19708 if (!import->is_header () && current_lang_depth ())
19709 /* Only header units should appear inside language
19710 specifications. The std doesn't specify this, but I think
19711 that's an error in resolving US 033, because language linkage
19712 is also our escape clause to getting things into the global
19713 module, so we don't want to confuse things by having to think
19714 about whether 'extern "C++" { import foo; }' puts foo's
19715 contents into the global module all of a sudden. */
19716 warning (0, "import of named module %qs inside language-linkage block",
19717 import->get_flatname ());
19718
19719 if (exporting_p || module_exporting_p ())
19720 import->exported_p = true;
19721
19722 if (import->loadedness != ML_NONE)
19723 {
19724 from_loc = ordinary_loc_of (line_table, from_loc);
19725 linemap_module_reparent (line_table, import->loc, from_loc);
19726 }
19727 gcc_checking_assert (!import->module_p);
19728 gcc_checking_assert (import->is_direct () && import->has_location ());
19729
19730 direct_import (import, reader);
19731 }
19732
19733 /* Declare the name of the current module to be NAME. EXPORTING_p is
19734 true if this TU is the exporting module unit. */
19735
19736 void
19737 declare_module (module_state *module, location_t from_loc, bool exporting_p,
19738 tree, cpp_reader *reader)
19739 {
19740 gcc_assert (global_namespace == current_scope ());
19741
19742 module_state *current = (*modules)[0];
19743 if (module_purview_p () || module->loadedness > ML_CONFIG)
19744 {
19745 error_at (from_loc, module_purview_p ()
19746 ? G_("module already declared")
19747 : G_("module already imported"));
19748 if (module_purview_p ())
19749 module = current;
19750 inform (module->loc, module_purview_p ()
19751 ? G_("module %qs declared here")
19752 : G_("module %qs imported here"),
19753 module->get_flatname ());
19754 return;
19755 }
19756
19757 gcc_checking_assert (module->module_p);
19758 gcc_checking_assert (module->is_direct () && module->has_location ());
19759
19760 /* Yer a module, 'arry. */
19761 module_kind = module->is_header () ? MK_HEADER : MK_NAMED | MK_ATTACH;
19762
19763 // Even in header units, we consider the decls to be purview
19764 module_kind |= MK_PURVIEW;
19765
19766 if (module->is_partition ())
19767 module_kind |= MK_PARTITION;
19768 if (exporting_p)
19769 {
19770 module->interface_p = true;
19771 module_kind |= MK_INTERFACE;
19772 }
19773
19774 if (module_has_cmi_p ())
19775 {
19776 /* Copy the importing information we may have already done. We
19777 do not need to separate out the imports that only happen in
19778 the GMF, inspite of what the literal wording of the std
19779 might imply. See p2191, the core list had a discussion
19780 where the module implementors agreed that the GMF of a named
19781 module is invisible to importers. */
19782 module->imports = current->imports;
19783
19784 module->mod = 0;
19785 (*modules)[0] = module;
19786 }
19787 else
19788 {
19789 module->interface_p = true;
19790 current->parent = module; /* So mangler knows module identity. */
19791 direct_import (module, reader);
19792 }
19793 }
19794
19795 /* Return true IFF we must emit a module global initializer function
19796 (which will be called by importers' init code). */
19797
19798 bool
19799 module_global_init_needed ()
19800 {
19801 return module_has_cmi_p () && !header_module_p ();
19802 }
19803
19804 /* Calculate which, if any, import initializers need calling. */
19805
19806 bool
19807 module_determine_import_inits ()
19808 {
19809 if (!modules || header_module_p ())
19810 return false;
19811
19812 /* Prune active_init_p. We need the same bitmap allocation
19813 scheme as for the imports member. */
19814 function_depth++; /* Disable GC. */
19815 bitmap covered_imports (BITMAP_GGC_ALLOC ());
19816
19817 bool any = false;
19818
19819 /* Because indirect imports are before their direct import, and
19820 we're scanning the array backwards, we only need one pass! */
19821 for (unsigned ix = modules->length (); --ix;)
19822 {
19823 module_state *import = (*modules)[ix];
19824
19825 if (!import->active_init_p)
19826 ;
19827 else if (bitmap_bit_p (covered_imports, ix))
19828 import->active_init_p = false;
19829 else
19830 {
19831 /* Everything this imports is therefore handled by its
19832 initializer, so doesn't need initializing by us. */
19833 bitmap_ior_into (covered_imports, import->imports);
19834 any = true;
19835 }
19836 }
19837 function_depth--;
19838
19839 return any;
19840 }
19841
19842 /* Emit calls to each direct import's global initializer. Including
19843 direct imports of directly imported header units. The initializers
19844 of (static) entities in header units will be called by their
19845 importing modules (for the instance contained within that), or by
19846 the current TU (for the instances we've brought in). Of course
19847 such header unit behaviour is evil, but iostream went through that
19848 door some time ago. */
19849
19850 void
19851 module_add_import_initializers ()
19852 {
19853 if (!modules || header_module_p ())
19854 return;
19855
19856 tree fntype = build_function_type (void_type_node, void_list_node);
19857 releasing_vec args; // There are no args
19858
19859 for (unsigned ix = modules->length (); --ix;)
19860 {
19861 module_state *import = (*modules)[ix];
19862 if (import->active_init_p)
19863 {
19864 tree name = mangle_module_global_init (ix);
19865 tree fndecl = build_lang_decl (FUNCTION_DECL, name, fntype);
19866
19867 DECL_CONTEXT (fndecl) = FROB_CONTEXT (global_namespace);
19868 SET_DECL_ASSEMBLER_NAME (fndecl, name);
19869 TREE_PUBLIC (fndecl) = true;
19870 determine_visibility (fndecl);
19871
19872 tree call = cp_build_function_call_vec (fndecl, &args,
19873 tf_warning_or_error);
19874 finish_expr_stmt (call);
19875 }
19876 }
19877 }
19878
19879 /* NAME & LEN are a preprocessed header name, possibly including the
19880 surrounding "" or <> characters. Return the raw string name of the
19881 module to which it refers. This will be an absolute path, or begin
19882 with ./, so it is immediately distinguishable from a (non-header
19883 unit) module name. If READER is non-null, ask the preprocessor to
19884 locate the header to which it refers using the appropriate include
19885 path. Note that we do never do \ processing of the string, as that
19886 matches the preprocessor's behaviour. */
19887
19888 static const char *
19889 canonicalize_header_name (cpp_reader *reader, location_t loc, bool unquoted,
19890 const char *str, size_t &len_r)
19891 {
19892 size_t len = len_r;
19893 static char *buf = 0;
19894 static size_t alloc = 0;
19895
19896 if (!unquoted)
19897 {
19898 gcc_checking_assert (len >= 2
19899 && ((reader && str[0] == '<' && str[len-1] == '>')
19900 || (str[0] == '"' && str[len-1] == '"')));
19901 str += 1;
19902 len -= 2;
19903 }
19904
19905 if (reader)
19906 {
19907 gcc_assert (!unquoted);
19908
19909 if (len >= alloc)
19910 {
19911 alloc = len + 1;
19912 buf = XRESIZEVEC (char, buf, alloc);
19913 }
19914 memcpy (buf, str, len);
19915 buf[len] = 0;
19916
19917 if (const char *hdr
19918 = cpp_probe_header_unit (reader, buf, str[-1] == '<', loc))
19919 {
19920 len = strlen (hdr);
19921 str = hdr;
19922 }
19923 else
19924 str = buf;
19925 }
19926
19927 if (!(str[0] == '.' ? IS_DIR_SEPARATOR (str[1]) : IS_ABSOLUTE_PATH (str)))
19928 {
19929 /* Prepend './' */
19930 if (len + 3 > alloc)
19931 {
19932 alloc = len + 3;
19933 buf = XRESIZEVEC (char, buf, alloc);
19934 }
19935
19936 buf[0] = '.';
19937 buf[1] = DIR_SEPARATOR;
19938 memmove (buf + 2, str, len);
19939 len += 2;
19940 buf[len] = 0;
19941 str = buf;
19942 }
19943
19944 len_r = len;
19945 return str;
19946 }
19947
19948 /* Set the CMI name from a cody packet. Issue an error if
19949 ill-formed. */
19950
19951 void module_state::set_filename (const Cody::Packet &packet)
19952 {
19953 gcc_checking_assert (!filename);
19954 if (packet.GetCode () == Cody::Client::PC_PATHNAME)
19955 filename = xstrdup (packet.GetString ().c_str ());
19956 else
19957 {
19958 gcc_checking_assert (packet.GetCode () == Cody::Client::PC_ERROR);
19959 error_at (loc, "unknown Compiled Module Interface: %s",
19960 packet.GetString ().c_str ());
19961 }
19962 }
19963
19964 /* Figure out whether to treat HEADER as an include or an import. */
19965
19966 static char *
19967 maybe_translate_include (cpp_reader *reader, line_maps *lmaps, location_t loc,
19968 const char *path)
19969 {
19970 if (!modules_p ())
19971 {
19972 /* Turn off. */
19973 cpp_get_callbacks (reader)->translate_include = NULL;
19974 return nullptr;
19975 }
19976
19977 if (!spans.init_p ())
19978 /* Before the main file, don't divert. */
19979 return nullptr;
19980
19981 dump.push (NULL);
19982
19983 dump () && dump ("Checking include translation '%s'", path);
19984 auto *mapper = get_mapper (cpp_main_loc (reader), cpp_get_deps (reader));
19985
19986 size_t len = strlen (path);
19987 path = canonicalize_header_name (NULL, loc, true, path, len);
19988 auto packet = mapper->IncludeTranslate (path, Cody::Flags::None, len);
19989 int xlate = false;
19990 if (packet.GetCode () == Cody::Client::PC_BOOL)
19991 xlate = -int (packet.GetInteger ());
19992 else if (packet.GetCode () == Cody::Client::PC_PATHNAME)
19993 {
19994 /* Record the CMI name for when we do the import. */
19995 module_state *import = get_module (build_string (len, path));
19996 import->set_filename (packet);
19997 xlate = +1;
19998 }
19999 else
20000 {
20001 gcc_checking_assert (packet.GetCode () == Cody::Client::PC_ERROR);
20002 error_at (loc, "cannot determine %<#include%> translation of %s: %s",
20003 path, packet.GetString ().c_str ());
20004 }
20005
20006 bool note = false;
20007 if (note_include_translate_yes && xlate > 1)
20008 note = true;
20009 else if (note_include_translate_no && xlate == 0)
20010 note = true;
20011 else if (note_includes)
20012 /* We do not expect the note_includes vector to be large, so O(N)
20013 iteration. */
20014 for (unsigned ix = note_includes->length (); !note && ix--;)
20015 if (!strcmp ((*note_includes)[ix], path))
20016 note = true;
20017
20018 if (note)
20019 inform (loc, xlate
20020 ? G_("include %qs translated to import")
20021 : G_("include %qs processed textually") , path);
20022
20023 dump () && dump (xlate ? "Translating include to import"
20024 : "Keeping include as include");
20025 dump.pop (0);
20026
20027 if (!(xlate > 0))
20028 return nullptr;
20029
20030 /* Create the translation text. */
20031 loc = ordinary_loc_of (lmaps, loc);
20032 const line_map_ordinary *map
20033 = linemap_check_ordinary (linemap_lookup (lmaps, loc));
20034 unsigned col = SOURCE_COLUMN (map, loc);
20035 col -= (col != 0); /* Columns are 1-based. */
20036
20037 unsigned alloc = len + col + 60;
20038 char *res = XNEWVEC (char, alloc);
20039
20040 strcpy (res, "__import");
20041 unsigned actual = 8;
20042 if (col > actual)
20043 {
20044 /* Pad out so the filename appears at the same position. */
20045 memset (res + actual, ' ', col - actual);
20046 actual = col;
20047 }
20048 /* No need to encode characters, that's not how header names are
20049 handled. */
20050 actual += snprintf (res + actual, alloc - actual,
20051 "\"%s\" [[__translated]];\n", path);
20052 gcc_checking_assert (actual < alloc);
20053
20054 /* cpplib will delete the buffer. */
20055 return res;
20056 }
20057
20058 static void
20059 begin_header_unit (cpp_reader *reader)
20060 {
20061 /* Set the module header name from the main_input_filename. */
20062 const char *main = main_input_filename;
20063 size_t len = strlen (main);
20064 main = canonicalize_header_name (NULL, 0, true, main, len);
20065 module_state *module = get_module (build_string (len, main));
20066
20067 preprocess_module (module, cpp_main_loc (reader), false, false, true, reader);
20068 }
20069
20070 /* We've just properly entered the main source file. I.e. after the
20071 command line, builtins and forced headers. Record the line map and
20072 location of this map. Note we may be called more than once. The
20073 first call sticks. */
20074
20075 void
20076 module_begin_main_file (cpp_reader *reader, line_maps *lmaps,
20077 const line_map_ordinary *map)
20078 {
20079 gcc_checking_assert (lmaps == line_table);
20080 if (modules_p () && !spans.init_p ())
20081 {
20082 unsigned n = dump.push (NULL);
20083 spans.init (lmaps, map);
20084 dump.pop (n);
20085 if (flag_header_unit && !cpp_get_options (reader)->preprocessed)
20086 {
20087 /* Tell the preprocessor this is an include file. */
20088 cpp_retrofit_as_include (reader);
20089 begin_header_unit (reader);
20090 }
20091 }
20092 }
20093
20094 /* Process the pending_import queue, making sure we know the
20095 filenames. */
20096
20097 static void
20098 name_pending_imports (cpp_reader *reader)
20099 {
20100 auto *mapper = get_mapper (cpp_main_loc (reader), cpp_get_deps (reader));
20101
20102 if (!vec_safe_length (pending_imports))
20103 /* Not doing anything. */
20104 return;
20105
20106 timevar_start (TV_MODULE_MAPPER);
20107
20108 auto n = dump.push (NULL);
20109 dump () && dump ("Resolving direct import names");
20110 bool want_deps = (bool (mapper->get_flags () & Cody::Flags::NameOnly)
20111 || cpp_get_deps (reader));
20112 bool any = false;
20113
20114 for (unsigned ix = 0; ix != pending_imports->length (); ix++)
20115 {
20116 module_state *module = (*pending_imports)[ix];
20117 gcc_checking_assert (module->is_direct ());
20118 if (!module->filename && !module->visited_p)
20119 {
20120 bool export_p = (module->module_p
20121 && (module->is_partition () || module->exported_p));
20122
20123 Cody::Flags flags = Cody::Flags::None;
20124 if (flag_preprocess_only
20125 && !(module->is_header () && !export_p))
20126 {
20127 if (!want_deps)
20128 continue;
20129 flags = Cody::Flags::NameOnly;
20130 }
20131
20132 if (!any)
20133 {
20134 any = true;
20135 mapper->Cork ();
20136 }
20137 if (export_p)
20138 mapper->ModuleExport (module->get_flatname (), flags);
20139 else
20140 mapper->ModuleImport (module->get_flatname (), flags);
20141 module->visited_p = true;
20142 }
20143 }
20144
20145 if (any)
20146 {
20147 auto response = mapper->Uncork ();
20148 auto r_iter = response.begin ();
20149 for (unsigned ix = 0; ix != pending_imports->length (); ix++)
20150 {
20151 module_state *module = (*pending_imports)[ix];
20152 if (module->visited_p)
20153 {
20154 module->visited_p = false;
20155 gcc_checking_assert (!module->filename);
20156
20157 module->set_filename (*r_iter);
20158 ++r_iter;
20159 }
20160 }
20161 }
20162
20163 dump.pop (n);
20164
20165 timevar_stop (TV_MODULE_MAPPER);
20166 }
20167
20168 /* We've just lexed a module-specific control line for MODULE. Mark
20169 the module as a direct import, and possibly load up its macro
20170 state. Returns the primary module, if this is a module
20171 declaration. */
20172 /* Perhaps we should offer a preprocessing mode where we read the
20173 directives from the header unit, rather than require the header's
20174 CMI. */
20175
20176 module_state *
20177 preprocess_module (module_state *module, location_t from_loc,
20178 bool in_purview, bool is_import, bool is_export,
20179 cpp_reader *reader)
20180 {
20181 if (!is_import)
20182 {
20183 if (module->loc)
20184 /* It's already been mentioned, so ignore its module-ness. */
20185 is_import = true;
20186 else
20187 {
20188 /* Record it is the module. */
20189 module->module_p = true;
20190 if (is_export)
20191 {
20192 module->exported_p = true;
20193 module->interface_p = true;
20194 }
20195 }
20196 }
20197
20198 if (module->directness < MD_DIRECT + in_purview)
20199 {
20200 /* Mark as a direct import. */
20201 module->directness = module_directness (MD_DIRECT + in_purview);
20202
20203 /* Set the location to be most informative for users. */
20204 from_loc = ordinary_loc_of (line_table, from_loc);
20205 if (module->loadedness != ML_NONE)
20206 linemap_module_reparent (line_table, module->loc, from_loc);
20207 else
20208 {
20209 module->loc = from_loc;
20210 if (!module->flatname)
20211 module->set_flatname ();
20212 }
20213 }
20214
20215 auto desired = ML_CONFIG;
20216 if (is_import
20217 && module->is_header ()
20218 && (!cpp_get_options (reader)->preprocessed
20219 || cpp_get_options (reader)->directives_only))
20220 /* We need preprocessor state now. */
20221 desired = ML_PREPROCESSOR;
20222
20223 if (!is_import || module->loadedness < desired)
20224 {
20225 vec_safe_push (pending_imports, module);
20226
20227 if (desired == ML_PREPROCESSOR)
20228 {
20229 unsigned n = dump.push (NULL);
20230
20231 dump () && dump ("Reading %M preprocessor state", module);
20232 name_pending_imports (reader);
20233
20234 /* Preserve the state of the line-map. */
20235 unsigned pre_hwm = LINEMAPS_ORDINARY_USED (line_table);
20236
20237 /* We only need to close the span, if we're going to emit a
20238 CMI. But that's a little tricky -- our token scanner
20239 needs to be smarter -- and this isn't much state.
20240 Remember, we've not parsed anything at this point, so
20241 our module state flags are inadequate. */
20242 spans.maybe_init ();
20243 spans.close ();
20244
20245 timevar_start (TV_MODULE_IMPORT);
20246
20247 /* Load the config of each pending import -- we must assign
20248 module numbers monotonically. */
20249 for (unsigned ix = 0; ix != pending_imports->length (); ix++)
20250 {
20251 auto *import = (*pending_imports)[ix];
20252 if (!(import->module_p
20253 && (import->is_partition () || import->exported_p))
20254 && import->loadedness == ML_NONE
20255 && (import->is_header () || !flag_preprocess_only))
20256 {
20257 unsigned n = dump.push (import);
20258 import->do_import (reader, true);
20259 dump.pop (n);
20260 }
20261 }
20262 vec_free (pending_imports);
20263
20264 /* Restore the line-map state. */
20265 spans.open (linemap_module_restore (line_table, pre_hwm));
20266
20267 /* Now read the preprocessor state of this particular
20268 import. */
20269 if (module->loadedness == ML_CONFIG
20270 && module->read_preprocessor (true))
20271 module->import_macros ();
20272
20273 timevar_stop (TV_MODULE_IMPORT);
20274
20275 dump.pop (n);
20276 }
20277 }
20278
20279 return is_import ? NULL : get_primary (module);
20280 }
20281
20282 /* We've completed phase-4 translation. Emit any dependency
20283 information for the not-yet-loaded direct imports, and fill in
20284 their file names. We'll have already loaded up the direct header
20285 unit wavefront. */
20286
20287 void
20288 preprocessed_module (cpp_reader *reader)
20289 {
20290 unsigned n = dump.push (NULL);
20291
20292 dump () && dump ("Completed phase-4 (tokenization) processing");
20293
20294 name_pending_imports (reader);
20295 vec_free (pending_imports);
20296
20297 spans.maybe_init ();
20298 spans.close ();
20299
20300 using iterator = hash_table<module_state_hash>::iterator;
20301 if (mkdeps *deps = cpp_get_deps (reader))
20302 {
20303 /* Walk the module hash, informing the dependency machinery. */
20304 iterator end = modules_hash->end ();
20305 for (iterator iter = modules_hash->begin (); iter != end; ++iter)
20306 {
20307 module_state *module = *iter;
20308
20309 if (module->is_direct ())
20310 {
20311 if (module->is_module ()
20312 && (module->is_interface () || module->is_partition ()))
20313 deps_add_module_target (deps, module->get_flatname (),
20314 maybe_add_cmi_prefix (module->filename),
20315 module->is_header (),
20316 module->is_exported ());
20317 else
20318 deps_add_module_dep (deps, module->get_flatname ());
20319 }
20320 }
20321 }
20322
20323 if (flag_header_unit && !flag_preprocess_only)
20324 {
20325 /* Find the main module -- remember, it's not yet in the module
20326 array. */
20327 iterator end = modules_hash->end ();
20328 for (iterator iter = modules_hash->begin (); iter != end; ++iter)
20329 {
20330 module_state *module = *iter;
20331 if (module->is_module ())
20332 {
20333 declare_module (module, cpp_main_loc (reader), true, NULL, reader);
20334 module_kind |= MK_EXPORTING;
20335 break;
20336 }
20337 }
20338 }
20339
20340 dump.pop (n);
20341 }
20342
20343 /* VAL is a global tree, add it to the global vec if it is
20344 interesting. Add some of its targets, if they too are
20345 interesting. We do not add identifiers, as they can be re-found
20346 via the identifier hash table. There is a cost to the number of
20347 global trees. */
20348
20349 static int
20350 maybe_add_global (tree val, unsigned &crc)
20351 {
20352 int v = 0;
20353
20354 if (val && !(identifier_p (val) || TREE_VISITED (val)))
20355 {
20356 TREE_VISITED (val) = true;
20357 crc = crc32_unsigned (crc, fixed_trees->length ());
20358 vec_safe_push (fixed_trees, val);
20359 v++;
20360
20361 if (CODE_CONTAINS_STRUCT (TREE_CODE (val), TS_TYPED))
20362 v += maybe_add_global (TREE_TYPE (val), crc);
20363 if (CODE_CONTAINS_STRUCT (TREE_CODE (val), TS_TYPE_COMMON))
20364 v += maybe_add_global (TYPE_NAME (val), crc);
20365 }
20366
20367 return v;
20368 }
20369
20370 /* Initialize module state. Create the hash table, determine the
20371 global trees. Create the module for current TU. */
20372
20373 void
20374 init_modules (cpp_reader *reader)
20375 {
20376 /* PCH should not be reachable because of lang-specs, but the
20377 user could have overriden that. */
20378 if (pch_file)
20379 fatal_error (input_location,
20380 "C++ modules are incompatible with precompiled headers");
20381
20382 if (cpp_get_options (reader)->traditional)
20383 fatal_error (input_location,
20384 "C++ modules are incompatible with traditional preprocessing");
20385
20386 if (flag_preprocess_only)
20387 {
20388 cpp_options *cpp_opts = cpp_get_options (reader);
20389 if (flag_no_output
20390 || (cpp_opts->deps.style != DEPS_NONE
20391 && !cpp_opts->deps.need_preprocessor_output))
20392 {
20393 warning (0, flag_dump_macros == 'M'
20394 ? G_("macro debug output may be incomplete with modules")
20395 : G_("module dependencies require preprocessing"));
20396 if (cpp_opts->deps.style != DEPS_NONE)
20397 inform (input_location, "you should use the %<-%s%> option",
20398 cpp_opts->deps.style == DEPS_SYSTEM ? "MD" : "MMD");
20399 }
20400 }
20401
20402 /* :: is always exported. */
20403 DECL_MODULE_EXPORT_P (global_namespace) = true;
20404
20405 modules_hash = hash_table<module_state_hash>::create_ggc (31);
20406 vec_safe_reserve (modules, 20);
20407
20408 /* Create module for current TU. */
20409 module_state *current
20410 = new (ggc_alloc<module_state> ()) module_state (NULL_TREE, NULL, false);
20411 current->mod = 0;
20412 bitmap_set_bit (current->imports, 0);
20413 modules->quick_push (current);
20414
20415 gcc_checking_assert (!fixed_trees);
20416
20417 headers = BITMAP_GGC_ALLOC ();
20418
20419 if (note_includes)
20420 /* Canonicalize header names. */
20421 for (unsigned ix = 0; ix != note_includes->length (); ix++)
20422 {
20423 const char *hdr = (*note_includes)[ix];
20424 size_t len = strlen (hdr);
20425
20426 bool system = hdr[0] == '<';
20427 bool user = hdr[0] == '"';
20428 bool delimed = system || user;
20429
20430 if (len <= (delimed ? 2 : 0)
20431 || (delimed && hdr[len-1] != (system ? '>' : '"')))
20432 error ("invalid header name %qs", hdr);
20433
20434 hdr = canonicalize_header_name (delimed ? reader : NULL,
20435 0, !delimed, hdr, len);
20436 char *path = XNEWVEC (char, len + 1);
20437 memcpy (path, hdr, len);
20438 path[len] = 0;
20439
20440 (*note_includes)[ix] = path;
20441 }
20442
20443 if (note_cmis)
20444 /* Canonicalize & mark module names. */
20445 for (unsigned ix = 0; ix != note_cmis->length (); ix++)
20446 {
20447 const char *name = (*note_cmis)[ix];
20448 size_t len = strlen (name);
20449
20450 bool is_system = name[0] == '<';
20451 bool is_user = name[0] == '"';
20452 bool is_pathname = false;
20453 if (!(is_system || is_user))
20454 for (unsigned ix = len; !is_pathname && ix--;)
20455 is_pathname = IS_DIR_SEPARATOR (name[ix]);
20456 if (is_system || is_user || is_pathname)
20457 {
20458 if (len <= (is_pathname ? 0 : 2)
20459 || (!is_pathname && name[len-1] != (is_system ? '>' : '"')))
20460 {
20461 error ("invalid header name %qs", name);
20462 continue;
20463 }
20464 else
20465 name = canonicalize_header_name (is_pathname ? nullptr : reader,
20466 0, is_pathname, name, len);
20467 }
20468 if (auto module = get_module (name))
20469 module->inform_cmi_p = 1;
20470 else
20471 error ("invalid module name %qs", name);
20472 }
20473
20474 dump.push (NULL);
20475
20476 /* Determine lazy handle bound. */
20477 {
20478 unsigned limit = 1000;
20479 #if HAVE_GETRLIMIT
20480 struct rlimit rlimit;
20481 if (!getrlimit (RLIMIT_NOFILE, &rlimit))
20482 {
20483 lazy_hard_limit = (rlimit.rlim_max < 1000000
20484 ? unsigned (rlimit.rlim_max) : 1000000);
20485 lazy_hard_limit = (lazy_hard_limit > LAZY_HEADROOM
20486 ? lazy_hard_limit - LAZY_HEADROOM : 0);
20487 if (rlimit.rlim_cur < limit)
20488 limit = unsigned (rlimit.rlim_cur);
20489 }
20490 #endif
20491 limit = limit > LAZY_HEADROOM ? limit - LAZY_HEADROOM : 1;
20492
20493 if (unsigned parm = param_lazy_modules)
20494 {
20495 if (parm <= limit || !lazy_hard_limit || !try_increase_lazy (parm))
20496 lazy_limit = parm;
20497 }
20498 else
20499 lazy_limit = limit;
20500 }
20501
20502 if (dump ())
20503 {
20504 verstr_t ver;
20505 version2string (MODULE_VERSION, ver);
20506 dump ("Source: %s", main_input_filename);
20507 dump ("Compiler: %s", version_string);
20508 dump ("Modules: %s", ver);
20509 dump ("Checking: %s",
20510 #if CHECKING_P
20511 "checking"
20512 #elif ENABLE_ASSERT_CHECKING
20513 "asserting"
20514 #else
20515 "release"
20516 #endif
20517 );
20518 dump ("Compiled by: "
20519 #ifdef __GNUC__
20520 "GCC %d.%d, %s", __GNUC__, __GNUC_MINOR__,
20521 #ifdef __OPTIMIZE__
20522 "optimizing"
20523 #else
20524 "not optimizing"
20525 #endif
20526 #else
20527 "not GCC"
20528 #endif
20529 );
20530 dump ("Reading: %s", MAPPED_READING ? "mmap" : "fileio");
20531 dump ("Writing: %s", MAPPED_WRITING ? "mmap" : "fileio");
20532 dump ("Lazy limit: %u", lazy_limit);
20533 dump ("Lazy hard limit: %u", lazy_hard_limit);
20534 dump ("");
20535 }
20536
20537 /* Construct the global tree array. This is an array of unique
20538 global trees (& types). Do this now, rather than lazily, as
20539 some global trees are lazily created and we don't want that to
20540 mess with our syndrome of fixed trees. */
20541 unsigned crc = 0;
20542 vec_alloc (fixed_trees, 250);
20543
20544 dump () && dump ("+Creating globals");
20545 /* Insert the TRANSLATION_UNIT_DECL. */
20546 TREE_VISITED (DECL_CONTEXT (global_namespace)) = true;
20547 fixed_trees->quick_push (DECL_CONTEXT (global_namespace));
20548 for (unsigned jx = 0; global_tree_arys[jx].first; jx++)
20549 {
20550 const tree *ptr = global_tree_arys[jx].first;
20551 unsigned limit = global_tree_arys[jx].second;
20552
20553 for (unsigned ix = 0; ix != limit; ix++, ptr++)
20554 {
20555 !(ix & 31) && dump ("") && dump ("+\t%u:%u:", jx, ix);
20556 unsigned v = maybe_add_global (*ptr, crc);
20557 dump () && dump ("+%u", v);
20558 }
20559 }
20560 /* OS- and machine-specific types are dynamically registered at
20561 runtime, so cannot be part of global_tree_arys. */
20562 registered_builtin_types && dump ("") && dump ("+\tB:");
20563 for (tree t = registered_builtin_types; t; t = TREE_CHAIN (t))
20564 {
20565 unsigned v = maybe_add_global (TREE_VALUE (t), crc);
20566 dump () && dump ("+%u", v);
20567 }
20568 global_crc = crc32_unsigned (crc, fixed_trees->length ());
20569 dump ("") && dump ("Created %u unique globals, crc=%x",
20570 fixed_trees->length (), global_crc);
20571 for (unsigned ix = fixed_trees->length (); ix--;)
20572 TREE_VISITED ((*fixed_trees)[ix]) = false;
20573
20574 dump.pop (0);
20575
20576 if (!flag_module_lazy)
20577 /* Get the mapper now, if we're not being lazy. */
20578 get_mapper (cpp_main_loc (reader), cpp_get_deps (reader));
20579
20580 if (!flag_preprocess_only)
20581 {
20582 pending_table = new pending_map_t (EXPERIMENT (1, 400));
20583 entity_map = new entity_map_t (EXPERIMENT (1, 400));
20584 vec_safe_reserve (entity_ary, EXPERIMENT (1, 400));
20585 imported_temploid_friends
20586 = decl_tree_cache_map::create_ggc (EXPERIMENT (1, 400));
20587 }
20588
20589 #if CHECKING_P
20590 note_defs = note_defs_table_t::create_ggc (1000);
20591 #endif
20592
20593 if (flag_header_unit && cpp_get_options (reader)->preprocessed)
20594 begin_header_unit (reader);
20595
20596 /* Collect here to make sure things are tagged correctly (when
20597 aggressively GC'd). */
20598 ggc_collect ();
20599 }
20600
20601 /* If NODE is a deferred macro, load it. */
20602
20603 static int
20604 load_macros (cpp_reader *reader, cpp_hashnode *node, void *)
20605 {
20606 location_t main_loc
20607 = MAP_START_LOCATION (LINEMAPS_ORDINARY_MAP_AT (line_table, 0));
20608
20609 if (cpp_user_macro_p (node)
20610 && !node->value.macro)
20611 {
20612 cpp_macro *macro = cpp_get_deferred_macro (reader, node, main_loc);
20613 dump () && dump ("Loaded macro #%s %I",
20614 macro ? "define" : "undef", identifier (node));
20615 }
20616
20617 return 1;
20618 }
20619
20620 /* At the end of tokenizing, we no longer need the macro tables of
20621 imports. But the user might have requested some checking. */
20622
20623 void
20624 maybe_check_all_macros (cpp_reader *reader)
20625 {
20626 if (!warn_imported_macros)
20627 return;
20628
20629 /* Force loading of any remaining deferred macros. This will
20630 produce diagnostics if they are ill-formed. */
20631 unsigned n = dump.push (NULL);
20632 cpp_forall_identifiers (reader, load_macros, NULL);
20633 dump.pop (n);
20634 }
20635
20636 // State propagated from finish_module_processing to fini_modules
20637
20638 struct module_processing_cookie
20639 {
20640 elf_out out;
20641 module_state_config config;
20642 char *cmi_name;
20643 char *tmp_name;
20644 unsigned crc;
20645 bool began;
20646
20647 module_processing_cookie (char *cmi, char *tmp, int fd, int e)
20648 : out (fd, e), cmi_name (cmi), tmp_name (tmp), crc (0), began (false)
20649 {
20650 }
20651 ~module_processing_cookie ()
20652 {
20653 XDELETEVEC (tmp_name);
20654 XDELETEVEC (cmi_name);
20655 }
20656 };
20657
20658 /* Write the CMI, if we're a module interface. */
20659
20660 void *
20661 finish_module_processing (cpp_reader *reader)
20662 {
20663 module_processing_cookie *cookie = nullptr;
20664
20665 if (header_module_p ())
20666 module_kind &= ~MK_EXPORTING;
20667
20668 if (!modules || !(*modules)[0]->name)
20669 {
20670 if (flag_module_only)
20671 warning (0, "%<-fmodule-only%> used for non-interface");
20672 }
20673 else if (!flag_syntax_only)
20674 {
20675 int fd = -1;
20676 int e = -1;
20677
20678 timevar_start (TV_MODULE_EXPORT);
20679
20680 /* Force a valid but empty line map at the end. This simplifies
20681 the line table preparation and writing logic. */
20682 linemap_add (line_table, LC_ENTER, false, "", 0);
20683
20684 /* We write to a tmpname, and then atomically rename. */
20685 char *cmi_name = NULL;
20686 char *tmp_name = NULL;
20687 module_state *state = (*modules)[0];
20688
20689 unsigned n = dump.push (state);
20690 state->announce ("creating");
20691 if (state->filename)
20692 {
20693 size_t len = 0;
20694 cmi_name = xstrdup (maybe_add_cmi_prefix (state->filename, &len));
20695 tmp_name = XNEWVEC (char, len + 3);
20696 memcpy (tmp_name, cmi_name, len);
20697 strcpy (&tmp_name[len], "~");
20698
20699 if (!errorcount)
20700 for (unsigned again = 2; ; again--)
20701 {
20702 fd = open (tmp_name,
20703 O_RDWR | O_CREAT | O_TRUNC | O_CLOEXEC | O_BINARY,
20704 S_IRUSR|S_IWUSR|S_IRGRP|S_IWGRP|S_IROTH|S_IWOTH);
20705 e = errno;
20706 if (fd >= 0 || !again || e != ENOENT)
20707 break;
20708 create_dirs (tmp_name);
20709 }
20710 if (note_module_cmi_yes || state->inform_cmi_p)
20711 inform (state->loc, "writing CMI %qs", cmi_name);
20712 dump () && dump ("CMI is %s", cmi_name);
20713 }
20714
20715 cookie = new module_processing_cookie (cmi_name, tmp_name, fd, e);
20716
20717 if (errorcount)
20718 warning_at (state->loc, 0, "not writing module %qs due to errors",
20719 state->get_flatname ());
20720 else if (cookie->out.begin ())
20721 {
20722 cookie->began = true;
20723 auto loc = input_location;
20724 /* So crashes finger-point the module decl. */
20725 input_location = state->loc;
20726 state->write_begin (&cookie->out, reader, cookie->config, cookie->crc);
20727 input_location = loc;
20728 }
20729
20730 dump.pop (n);
20731 timevar_stop (TV_MODULE_EXPORT);
20732
20733 ggc_collect ();
20734 }
20735
20736 if (modules)
20737 {
20738 unsigned n = dump.push (NULL);
20739 dump () && dump ("Imported %u modules", modules->length () - 1);
20740 dump () && dump ("Containing %u clusters", available_clusters);
20741 dump () && dump ("Loaded %u clusters (%u%%)", loaded_clusters,
20742 (loaded_clusters * 100 + available_clusters / 2) /
20743 (available_clusters + !available_clusters));
20744 dump.pop (n);
20745 }
20746
20747 return cookie;
20748 }
20749
20750 // Do the final emission of a module. At this point we know whether
20751 // the module static initializer is a NOP or not.
20752
20753 static void
20754 late_finish_module (cpp_reader *reader, module_processing_cookie *cookie,
20755 bool init_fn_non_empty)
20756 {
20757 timevar_start (TV_MODULE_EXPORT);
20758
20759 module_state *state = (*modules)[0];
20760 unsigned n = dump.push (state);
20761 state->announce ("finishing");
20762
20763 cookie->config.active_init = init_fn_non_empty;
20764 if (cookie->began)
20765 state->write_end (&cookie->out, reader, cookie->config, cookie->crc);
20766
20767 if (cookie->out.end () && cookie->cmi_name)
20768 {
20769 /* Some OS's do not replace NEWNAME if it already exists.
20770 This'll have a race condition in erroneous concurrent
20771 builds. */
20772 unlink (cookie->cmi_name);
20773 if (rename (cookie->tmp_name, cookie->cmi_name))
20774 {
20775 dump () && dump ("Rename ('%s','%s') errno=%u",
20776 cookie->tmp_name, cookie->cmi_name, errno);
20777 cookie->out.set_error (errno);
20778 }
20779 }
20780
20781 if (cookie->out.get_error () && cookie->began)
20782 {
20783 error_at (state->loc, "failed to write compiled module: %s",
20784 cookie->out.get_error (state->filename));
20785 state->note_cmi_name ();
20786 }
20787
20788 if (!errorcount)
20789 {
20790 auto *mapper = get_mapper (cpp_main_loc (reader), cpp_get_deps (reader));
20791 mapper->ModuleCompiled (state->get_flatname ());
20792 }
20793 else if (cookie->cmi_name)
20794 {
20795 /* We failed, attempt to erase all evidence we even tried. */
20796 unlink (cookie->tmp_name);
20797 unlink (cookie->cmi_name);
20798 }
20799
20800 delete cookie;
20801 dump.pop (n);
20802 timevar_stop (TV_MODULE_EXPORT);
20803 }
20804
20805 void
20806 fini_modules (cpp_reader *reader, void *cookie, bool has_inits)
20807 {
20808 if (cookie)
20809 late_finish_module (reader,
20810 static_cast<module_processing_cookie *> (cookie),
20811 has_inits);
20812
20813 /* We're done with the macro tables now. */
20814 vec_free (macro_exports);
20815 vec_free (macro_imports);
20816 headers = NULL;
20817
20818 /* We're now done with everything but the module names. */
20819 set_cmi_repo (NULL);
20820 if (mapper)
20821 {
20822 timevar_start (TV_MODULE_MAPPER);
20823 module_client::close_module_client (0, mapper);
20824 mapper = nullptr;
20825 timevar_stop (TV_MODULE_MAPPER);
20826 }
20827 module_state_config::release ();
20828
20829 #if CHECKING_P
20830 note_defs = NULL;
20831 #endif
20832
20833 if (modules)
20834 for (unsigned ix = modules->length (); --ix;)
20835 if (module_state *state = (*modules)[ix])
20836 state->release ();
20837
20838 /* No need to lookup modules anymore. */
20839 modules_hash = NULL;
20840
20841 /* Or entity array. We still need the entity map to find import numbers. */
20842 vec_free (entity_ary);
20843 entity_ary = NULL;
20844
20845 /* Or remember any pending entities. */
20846 delete pending_table;
20847 pending_table = NULL;
20848
20849 /* Or any keys -- Let it go! */
20850 delete keyed_table;
20851 keyed_table = NULL;
20852
20853 /* Allow a GC, we've possibly made much data unreachable. */
20854 ggc_collect ();
20855 }
20856
20857 /* If CODE is a module option, handle it & return true. Otherwise
20858 return false. For unknown reasons I cannot get the option
20859 generation machinery to set fmodule-mapper or -fmodule-header to
20860 make a string type option variable. */
20861
20862 bool
20863 handle_module_option (unsigned code, const char *str, int)
20864 {
20865 auto hdr = CMS_header;
20866
20867 switch (opt_code (code))
20868 {
20869 case OPT_fmodule_mapper_:
20870 module_mapper_name = str;
20871 return true;
20872
20873 case OPT_fmodule_header_:
20874 {
20875 if (!strcmp (str, "user"))
20876 hdr = CMS_user;
20877 else if (!strcmp (str, "system"))
20878 hdr = CMS_system;
20879 else
20880 error ("unknown header kind %qs", str);
20881 }
20882 /* Fallthrough. */
20883
20884 case OPT_fmodule_header:
20885 flag_header_unit = hdr;
20886 flag_modules = 1;
20887 return true;
20888
20889 case OPT_flang_info_include_translate_:
20890 vec_safe_push (note_includes, str);
20891 return true;
20892
20893 case OPT_flang_info_module_cmi_:
20894 vec_safe_push (note_cmis, str);
20895 return true;
20896
20897 default:
20898 return false;
20899 }
20900 }
20901
20902 /* Set preprocessor callbacks and options for modules. */
20903
20904 void
20905 module_preprocess_options (cpp_reader *reader)
20906 {
20907 gcc_checking_assert (!lang_hooks.preprocess_undef);
20908 if (modules_p ())
20909 {
20910 auto *cb = cpp_get_callbacks (reader);
20911
20912 cb->translate_include = maybe_translate_include;
20913 cb->user_deferred_macro = module_state::deferred_macro;
20914 if (flag_header_unit)
20915 {
20916 /* If the preprocessor hook is already in use, that
20917 implementation will call the undef langhook. */
20918 if (cb->undef)
20919 lang_hooks.preprocess_undef = module_state::undef_macro;
20920 else
20921 cb->undef = module_state::undef_macro;
20922 }
20923 auto *opt = cpp_get_options (reader);
20924 opt->module_directives = true;
20925 opt->main_search = cpp_main_search (flag_header_unit);
20926 }
20927 }
20928
20929 #include "gt-cp-module.h"