c++: Fix null this pointer [PR 98624]
[gcc.git] / gcc / cp / module.cc
1 /* C++ modules. Experimental!
2 Copyright (C) 2017-2021 Free Software Foundation, Inc.
3 Written by Nathan Sidwell <nathan@acm.org> while at FaceBook
4
5 This file is part of GCC.
6
7 GCC is free software; you can redistribute it and/or modify it
8 under the terms of the GNU General Public License as published by
9 the Free Software Foundation; either version 3, or (at your option)
10 any later version.
11
12 GCC is distributed in the hope that it will be useful, but
13 WITHOUT ANY WARRANTY; without even the implied warranty of
14 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
15 General Public License for more details.
16
17 You should have received a copy of the GNU General Public License
18 along with GCC; see the file COPYING3. If not see
19 <http://www.gnu.org/licenses/>. */
20
21 /* Comments in this file have a non-negligible chance of being wrong
22 or at least inaccurate. Due to (a) my misunderstanding, (b)
23 ambiguities that I have interpretted differently to original intent
24 (c) changes in the specification, (d) my poor wording, (e) source
25 changes. */
26
27 /* (Incomplete) Design Notes
28
29 A hash table contains all module names. Imported modules are
30 present in a modules array, which by construction places an
31 import's dependencies before the import itself. The single
32 exception is the current TU, which always occupies slot zero (even
33 when it is not a module).
34
35 Imported decls occupy an entity_ary, an array of binding_slots, indexed
36 by importing module and index within that module. A flat index is
37 used, as each module reserves a contiguous range of indices.
38 Initially each slot indicates the CMI section containing the
39 streamed decl. When the decl is imported it will point to the decl
40 itself.
41
42 Additionally each imported decl is mapped in the entity_map via its
43 DECL_UID to the flat index in the entity_ary. Thus we can locate
44 the index for any imported decl by using this map and then
45 de-flattening the index via a binary seach of the module vector.
46 Cross-module references are by (remapped) module number and
47 module-local index.
48
49 Each importable DECL contains several flags. The simple set are
50 DECL_EXPORT_P, DECL_MODULE_PURVIEW_P and DECL_MODULE_IMPORT_P. The
51 first indicates whether it is exported, the second whether it is in
52 the module purview (as opposed to the global module fragment), and
53 the third indicates whether it was an import into this TU or not.
54
55 The more detailed flags are DECL_MODULE_PARTITION_P,
56 DECL_MODULE_ENTITY_P & DECL_MODULE_PENDING_SPECIALIZATIONS_P. The
57 first is set in a primary interface unit on decls that were read
58 from module partitions (these will have DECL_MODULE_IMPORT_P set
59 too). Such decls will be streamed out to the primary's CMI.
60 DECL_MODULE_ENTITY_P is set when an entity is imported, even if it
61 matched a non-imported entity. Such a decl will not have
62 DECL_MODULE_IMPORT_P set, even though it has an entry in the entity
63 map and array. DECL_MODULE_PENDING_SPECIALIZATIONS_P is set on a
64 primary template, and indicates there are specializations that
65 should be streamed in before trying to specialize this template.
66
67 Header units are module-like.
68
69 For namespace-scope lookup, the decls for a particular module are
70 held located in a sparse array hanging off the binding of the name.
71 This is partitioned into two: a few fixed slots at the start
72 followed by the sparse slots afterwards. By construction we only
73 need to append new slots to the end -- there is never a need to
74 insert in the middle. The fixed slots are MODULE_SLOT_CURRENT for
75 the current TU (regardless of whether it is a module or not),
76 MODULE_SLOT_GLOBAL and MODULE_SLOT_PARTITION. These latter two
77 slots are used for merging entities across the global module and
78 module partitions respectively. MODULE_SLOT_PARTITION is only
79 present in a module. Neither of those two slots is searched during
80 name lookup -- they are internal use only. This vector is created
81 lazily once we require it, if there is only a declaration from the
82 current TU, a regular binding is present. It is converted on
83 demand.
84
85 OPTIMIZATION: Outside of the current TU, we only need ADL to work.
86 We could optimize regular lookup for the current TU by glomming all
87 the visible decls on its slot. Perhaps wait until design is a
88 little more settled though.
89
90 There is only one instance of each extern-linkage namespace. It
91 appears in every module slot that makes it visible. It also
92 appears in MODULE_SLOT_GLOBAL. (It is an ODR violation if they
93 collide with some other global module entity.) We also have an
94 optimization that shares the slot for adjacent modules that declare
95 the same such namespace.
96
97 A module interface compilation produces a Compiled Module Interface
98 (CMI). The format used is Encapsulated Lazy Records Of Numbered
99 Declarations, which is essentially ELF's section encapsulation. (As
100 all good nerds are aware, Elrond is half Elf.) Some sections are
101 named, and contain information about the module as a whole (indices
102 etc), and other sections are referenced by number. Although I
103 don't defend against actively hostile CMIs, there is some
104 checksumming involved to verify data integrity. When dumping out
105 an interface, we generate a graph of all the
106 independently-redeclarable DECLS that are needed, and the decls
107 they reference. From that we determine the strongly connected
108 components (SCC) within this TU. Each SCC is dumped to a separate
109 numbered section of the CMI. We generate a binding table section,
110 mapping each namespace&name to a defining section. This allows
111 lazy loading.
112
113 Lazy loading employs mmap to map a read-only image of the CMI.
114 It thus only occupies address space and is paged in on demand,
115 backed by the CMI file itself. If mmap is unavailable, regular
116 FILEIO is used. Also, there's a bespoke ELF reader/writer here,
117 which implements just the section table and sections (including
118 string sections) of a 32-bit ELF in host byte-order. You can of
119 course inspect it with readelf. I figured 32-bit is sufficient,
120 for a single module. I detect running out of section numbers, but
121 do not implement the ELF overflow mechanism. At least you'll get
122 an error if that happens.
123
124 We do not separate declarations and definitions. My guess is that
125 if you refer to the declaration, you'll also need the definition
126 (template body, inline function, class definition etc). But this
127 does mean we can get larger SCCs than if we separated them. It is
128 unclear whether this is a win or not.
129
130 Notice that we embed section indices into the contents of other
131 sections. Thus random manipulation of the CMI file by ELF tools
132 may well break it. The kosher way would probably be to introduce
133 indirection via section symbols, but that would require defining a
134 relocation type.
135
136 Notice that lazy loading of one module's decls can cause lazy
137 loading of other decls in the same or another module. Clearly we
138 want to avoid loops. In a correct program there can be no loops in
139 the module dependency graph, and the above-mentioned SCC algorithm
140 places all intra-module circular dependencies in the same SCC. It
141 also orders the SCCs wrt each other, so dependent SCCs come first.
142 As we load dependent modules first, we know there can be no
143 reference to a higher-numbered module, and because we write out
144 dependent SCCs first, likewise for SCCs within the module. This
145 allows us to immediately detect broken references. When loading,
146 we must ensure the rest of the compiler doesn't cause some
147 unconnected load to occur (for instance, instantiate a template).
148
149 Classes used:
150
151 dumper - logger
152
153 data - buffer
154
155 bytes - data streamer
156 bytes_in : bytes - scalar reader
157 bytes_out : bytes - scalar writer
158
159 elf - ELROND format
160 elf_in : elf - ELROND reader
161 elf_out : elf - ELROND writer
162
163 trees_in : bytes_in - tree reader
164 trees_out : bytes_out - tree writer
165
166 depset - dependency set
167 depset::hash - hash table of depsets
168 depset::tarjan - SCC determinator
169
170 uidset<T> - set T's related to a UID
171 uidset<T>::hash hash table of uidset<T>
172
173 loc_spans - location map data
174
175 module_state - module object
176
177 slurping - data needed during loading
178
179 macro_import - imported macro data
180 macro_export - exported macro data
181
182 The ELROND objects use mmap, for both reading and writing. If mmap
183 is unavailable, fileno IO is used to read and write blocks of data.
184
185 The mapper object uses fileno IO to communicate with the server or
186 program. */
187
188 /* In expermental (trunk) sources, MODULE_VERSION is a #define passed
189 in from the Makefile. It records the modification date of the
190 source directory -- that's the only way to stay sane. In release
191 sources, we (plan to) use the compiler's major.minor versioning.
192 While the format might not change between at minor versions, it
193 seems simplest to tie the two together. There's no concept of
194 inter-version compatibility. */
195 #define IS_EXPERIMENTAL(V) ((V) >= (1U << 20))
196 #define MODULE_MAJOR(V) ((V) / 10000)
197 #define MODULE_MINOR(V) ((V) % 10000)
198 #define EXPERIMENT(A,B) (IS_EXPERIMENTAL (MODULE_VERSION) ? (A) : (B))
199 #ifndef MODULE_VERSION
200 // Be sure you're ready! Remove #error this before release!
201 #error "Shtopp! What are you doing? This is not ready yet."
202 #include "bversion.h"
203 #define MODULE_VERSION (BUILDING_GCC_MAJOR * 10000U + BUILDING_GCC_MINOR)
204 #elif !IS_EXPERIMENTAL (MODULE_VERSION)
205 #error "This is not the version I was looking for."
206 #endif
207
208 #define _DEFAULT_SOURCE 1 /* To get TZ field of struct tm, if available. */
209 #include "config.h"
210 #define INCLUDE_STRING
211 #define INCLUDE_VECTOR
212 #include "system.h"
213 #include "coretypes.h"
214 #include "cp-tree.h"
215 #include "timevar.h"
216 #include "stringpool.h"
217 #include "dumpfile.h"
218 #include "bitmap.h"
219 #include "cgraph.h"
220 #include "tree-iterator.h"
221 #include "cpplib.h"
222 #include "mkdeps.h"
223 #include "incpath.h"
224 #include "libiberty.h"
225 #include "stor-layout.h"
226 #include "version.h"
227 #include "tree-diagnostic.h"
228 #include "toplev.h"
229 #include "opts.h"
230 #include "attribs.h"
231 #include "intl.h"
232 #include "langhooks.h"
233 /* This TU doesn't need or want to see the networking. */
234 #define CODY_NETWORKING 0
235 #include "mapper-client.h"
236
237 #if 0 // 1 for testing no mmap
238 #define MAPPED_READING 0
239 #define MAPPED_WRITING 0
240 #else
241 #if HAVE_MMAP_FILE && _POSIX_MAPPED_FILES > 0
242 /* mmap, munmap. */
243 #define MAPPED_READING 1
244 #if HAVE_SYSCONF && defined (_SC_PAGE_SIZE)
245 /* msync, sysconf (_SC_PAGE_SIZE), ftruncate */
246 /* posix_fallocate used if available. */
247 #define MAPPED_WRITING 1
248 #else
249 #define MAPPED_WRITING 0
250 #endif
251 #else
252 #define MAPPED_READING 0
253 #define MAPPED_WRITING 0
254 #endif
255 #endif
256
257 /* Some open(2) flag differences, what a colourful world it is! */
258 #if defined (O_CLOEXEC)
259 // OK
260 #elif defined (_O_NOINHERIT)
261 /* Windows' _O_NOINHERIT matches O_CLOEXEC flag */
262 #define O_CLOEXEC _O_NOINHERIT
263 #else
264 #define O_CLOEXEC 0
265 #endif
266 #if defined (O_BINARY)
267 // Ok?
268 #elif defined (_O_BINARY)
269 /* Windows' open(2) call defaults to text! */
270 #define O_BINARY _O_BINARY
271 #else
272 #define O_BINARY 0
273 #endif
274
275 static inline cpp_hashnode *cpp_node (tree id)
276 {
277 return CPP_HASHNODE (GCC_IDENT_TO_HT_IDENT (id));
278 }
279 static inline tree identifier (cpp_hashnode *node)
280 {
281 return HT_IDENT_TO_GCC_IDENT (HT_NODE (node));
282 }
283 static inline const_tree identifier (const cpp_hashnode *node)
284 {
285 return identifier (const_cast <cpp_hashnode *> (node));
286 }
287
288 /* During duplicate detection we need to tell some comparators that
289 these are equivalent. */
290 tree map_context_from;
291 tree map_context_to;
292
293 /* Id for dumping module information. */
294 int module_dump_id;
295
296 /* We have a special module owner. */
297 #define MODULE_UNKNOWN (~0U) /* Not yet known. */
298
299 /* Prefix for section names. */
300 #define MOD_SNAME_PFX ".gnu.c++"
301
302 /* Format a version for user consumption. */
303
304 typedef char verstr_t[32];
305 static void
306 version2string (unsigned version, verstr_t &out)
307 {
308 unsigned major = MODULE_MAJOR (version);
309 unsigned minor = MODULE_MINOR (version);
310
311 if (IS_EXPERIMENTAL (version))
312 sprintf (out, "%04u/%02u/%02u-%02u:%02u%s",
313 2000 + major / 10000, (major / 100) % 100, (major % 100),
314 minor / 100, minor % 100,
315 EXPERIMENT ("", " (experimental)"));
316 else
317 sprintf (out, "%u.%u", major, minor);
318 }
319
320 /* Include files to note translation for. */
321 static vec<const char *, va_heap, vl_embed> *note_includes;
322
323 /* Traits to hash an arbitrary pointer. Entries are not deletable,
324 and removal is a noop (removal needed upon destruction). */
325 template <typename T>
326 struct nodel_ptr_hash : pointer_hash<T>, typed_noop_remove <T *> {
327 /* Nothing is deletable. Everything is insertable. */
328 static bool is_deleted (T *) { return false; }
329 static void mark_deleted (T *) { gcc_unreachable (); }
330 };
331
332 /* Map from pointer to signed integer. */
333 typedef simple_hashmap_traits<nodel_ptr_hash<void>, int> ptr_int_traits;
334 typedef hash_map<void *,signed,ptr_int_traits> ptr_int_hash_map;
335
336 /********************************************************************/
337 /* Basic streaming & ELF. Serialization is usually via mmap. For
338 writing we slide a buffer over the output file, syncing it
339 approproiately. For reading we simply map the whole file (as a
340 file-backed read-only map -- it's just address space, leaving the
341 OS pager to deal with getting the data to us). Some buffers need
342 to be more conventional malloc'd contents. */
343
344 /* Variable length buffer. */
345
346 class data {
347 public:
348 class allocator {
349 public:
350 /* Tools tend to moan if the dtor's not virtual. */
351 virtual ~allocator () {}
352
353 public:
354 void grow (data &obj, unsigned needed, bool exact);
355 void shrink (data &obj);
356
357 public:
358 virtual char *grow (char *ptr, unsigned needed);
359 virtual void shrink (char *ptr);
360 };
361
362 public:
363 char *buffer; /* Buffer being transferred. */
364 /* Although size_t would be the usual size, we know we never get
365 more than 4GB of buffer -- because that's the limit of the
366 encapsulation format. And if you need bigger imports, you're
367 doing it wrong. */
368 unsigned size; /* Allocated size of buffer. */
369 unsigned pos; /* Position in buffer. */
370
371 public:
372 data ()
373 :buffer (NULL), size (0), pos (0)
374 {
375 }
376 ~data ()
377 {
378 /* Make sure the derived and/or using class know what they're
379 doing. */
380 gcc_checking_assert (!buffer);
381 }
382
383 protected:
384 char *use (unsigned count)
385 {
386 if (size < pos + count)
387 return NULL;
388 char *res = &buffer[pos];
389 pos += count;
390 return res;
391 }
392
393 public:
394 void unuse (unsigned count)
395 {
396 pos -= count;
397 }
398
399 public:
400 static allocator simple_memory;
401 };
402
403 /* The simple data allocator. */
404 data::allocator data::simple_memory;
405
406 /* Grow buffer to at least size NEEDED. */
407
408 void
409 data::allocator::grow (data &obj, unsigned needed, bool exact)
410 {
411 gcc_checking_assert (needed ? needed > obj.size : !obj.size);
412 if (!needed)
413 /* Pick a default size. */
414 needed = EXPERIMENT (100, 1000);
415
416 if (!exact)
417 needed *= 2;
418 obj.buffer = grow (obj.buffer, needed);
419 if (obj.buffer)
420 obj.size = needed;
421 else
422 obj.pos = obj.size = 0;
423 }
424
425 /* Free a buffer. */
426
427 void
428 data::allocator::shrink (data &obj)
429 {
430 shrink (obj.buffer);
431 obj.buffer = NULL;
432 obj.size = 0;
433 }
434
435 char *
436 data::allocator::grow (char *ptr, unsigned needed)
437 {
438 return XRESIZEVAR (char, ptr, needed);
439 }
440
441 void
442 data::allocator::shrink (char *ptr)
443 {
444 XDELETEVEC (ptr);
445 }
446
447 /* Byte streamer base. Buffer with read/write position and smarts
448 for single bits. */
449
450 class bytes : public data {
451 public:
452 typedef data parent;
453
454 protected:
455 uint32_t bit_val; /* Bit buffer. */
456 unsigned bit_pos; /* Next bit in bit buffer. */
457
458 public:
459 bytes ()
460 :parent (), bit_val (0), bit_pos (0)
461 {}
462 ~bytes ()
463 {
464 }
465
466 protected:
467 unsigned calc_crc (unsigned) const;
468
469 protected:
470 /* Finish bit packet. Rewind the bytes not used. */
471 unsigned bit_flush ()
472 {
473 gcc_assert (bit_pos);
474 unsigned bytes = (bit_pos + 7) / 8;
475 unuse (4 - bytes);
476 bit_pos = 0;
477 bit_val = 0;
478 return bytes;
479 }
480 };
481
482 /* Calculate the crc32 of the buffer. Note the CRC is stored in the
483 first 4 bytes, so don't include them. */
484
485 unsigned
486 bytes::calc_crc (unsigned l) const
487 {
488 unsigned crc = 0;
489 for (size_t ix = 4; ix < l; ix++)
490 crc = crc32_byte (crc, buffer[ix]);
491 return crc;
492 }
493
494 class elf_in;
495
496 /* Byte stream reader. */
497
498 class bytes_in : public bytes {
499 typedef bytes parent;
500
501 protected:
502 bool overrun; /* Sticky read-too-much flag. */
503
504 public:
505 bytes_in ()
506 : parent (), overrun (false)
507 {
508 }
509 ~bytes_in ()
510 {
511 }
512
513 public:
514 /* Begin reading a named section. */
515 bool begin (location_t loc, elf_in *src, const char *name);
516 /* Begin reading a numbered section with optional name. */
517 bool begin (location_t loc, elf_in *src, unsigned, const char * = NULL);
518 /* Complete reading a buffer. Propagate errors and return true on
519 success. */
520 bool end (elf_in *src);
521 /* Return true if there is unread data. */
522 bool more_p () const
523 {
524 return pos != size;
525 }
526
527 public:
528 /* Start reading at OFFSET. */
529 void random_access (unsigned offset)
530 {
531 if (offset > size)
532 set_overrun ();
533 pos = offset;
534 bit_pos = bit_val = 0;
535 }
536
537 public:
538 void align (unsigned boundary)
539 {
540 if (unsigned pad = pos & (boundary - 1))
541 read (boundary - pad);
542 }
543
544 public:
545 const char *read (unsigned count)
546 {
547 char *ptr = use (count);
548 if (!ptr)
549 set_overrun ();
550 return ptr;
551 }
552
553 public:
554 bool check_crc () const;
555 /* We store the CRC in the first 4 bytes, using host endianness. */
556 unsigned get_crc () const
557 {
558 return *(const unsigned *)&buffer[0];
559 }
560
561 public:
562 /* Manipulate the overrun flag. */
563 bool get_overrun () const
564 {
565 return overrun;
566 }
567 void set_overrun ()
568 {
569 overrun = true;
570 }
571
572 public:
573 unsigned u32 (); /* Read uncompressed integer. */
574
575 public:
576 bool b (); /* Read a bool. */
577 void bflush (); /* Completed a block of bools. */
578
579 private:
580 void bfill (); /* Get the next block of bools. */
581
582 public:
583 int c (); /* Read a char. */
584 int i (); /* Read a signed int. */
585 unsigned u (); /* Read an unsigned int. */
586 size_t z (); /* Read a size_t. */
587 HOST_WIDE_INT wi (); /* Read a HOST_WIDE_INT. */
588 unsigned HOST_WIDE_INT wu (); /* Read an unsigned HOST_WIDE_INT. */
589 const char *str (size_t * = NULL); /* Read a string. */
590 const void *buf (size_t); /* Read a fixed-length buffer. */
591 cpp_hashnode *cpp_node (); /* Read a cpp node. */
592 };
593
594 /* Verify the buffer's CRC is correct. */
595
596 bool
597 bytes_in::check_crc () const
598 {
599 if (size < 4)
600 return false;
601
602 unsigned c_crc = calc_crc (size);
603 if (c_crc != get_crc ())
604 return false;
605
606 return true;
607 }
608
609 class elf_out;
610
611 /* Byte stream writer. */
612
613 class bytes_out : public bytes {
614 typedef bytes parent;
615
616 public:
617 allocator *memory; /* Obtainer of memory. */
618
619 public:
620 bytes_out (allocator *memory)
621 : parent (), memory (memory)
622 {
623 }
624 ~bytes_out ()
625 {
626 }
627
628 public:
629 bool streaming_p () const
630 {
631 return memory != NULL;
632 }
633
634 public:
635 void set_crc (unsigned *crc_ptr);
636
637 public:
638 /* Begin writing, maybe reserve space for CRC. */
639 void begin (bool need_crc = true);
640 /* Finish writing. Spill to section by number. */
641 unsigned end (elf_out *, unsigned, unsigned *crc_ptr = NULL);
642
643 public:
644 void align (unsigned boundary)
645 {
646 if (unsigned pad = pos & (boundary - 1))
647 write (boundary - pad);
648 }
649
650 public:
651 char *write (unsigned count, bool exact = false)
652 {
653 if (size < pos + count)
654 memory->grow (*this, pos + count, exact);
655 return use (count);
656 }
657
658 public:
659 void u32 (unsigned); /* Write uncompressed integer. */
660
661 public:
662 void b (bool); /* Write bool. */
663 void bflush (); /* Finish block of bools. */
664
665 public:
666 void c (unsigned char); /* Write unsigned char. */
667 void i (int); /* Write signed int. */
668 void u (unsigned); /* Write unsigned int. */
669 void z (size_t s); /* Write size_t. */
670 void wi (HOST_WIDE_INT); /* Write HOST_WIDE_INT. */
671 void wu (unsigned HOST_WIDE_INT); /* Write unsigned HOST_WIDE_INT. */
672 void str (const char *ptr)
673 {
674 str (ptr, strlen (ptr));
675 }
676 void cpp_node (const cpp_hashnode *node)
677 {
678 str ((const char *)NODE_NAME (node), NODE_LEN (node));
679 }
680 void str (const char *, size_t); /* Write string of known length. */
681 void buf (const void *, size_t); /* Write fixed length buffer. */
682 void *buf (size_t); /* Create a writable buffer */
683
684 public:
685 /* Format a NUL-terminated raw string. */
686 void printf (const char *, ...) ATTRIBUTE_PRINTF_2;
687 void print_time (const char *, const tm *, const char *);
688
689 public:
690 /* Dump instrumentation. */
691 static void instrument ();
692
693 protected:
694 /* Instrumentation. */
695 static unsigned spans[4];
696 static unsigned lengths[4];
697 static int is_set;
698 };
699
700 /* Instrumentation. */
701 unsigned bytes_out::spans[4];
702 unsigned bytes_out::lengths[4];
703 int bytes_out::is_set = -1;
704
705 /* If CRC_PTR non-null, set the CRC of the buffer. Mix the CRC into
706 that pointed to by CRC_PTR. */
707
708 void
709 bytes_out::set_crc (unsigned *crc_ptr)
710 {
711 if (crc_ptr)
712 {
713 gcc_checking_assert (pos >= 4);
714
715 unsigned crc = calc_crc (pos);
716 unsigned accum = *crc_ptr;
717 /* Only mix the existing *CRC_PTR if it is non-zero. */
718 accum = accum ? crc32_unsigned (accum, crc) : crc;
719 *crc_ptr = accum;
720
721 /* Buffer will be sufficiently aligned. */
722 *(unsigned *)buffer = crc;
723 }
724 }
725
726 /* Finish a set of bools. */
727
728 void
729 bytes_out::bflush ()
730 {
731 if (bit_pos)
732 {
733 u32 (bit_val);
734 lengths[2] += bit_flush ();
735 }
736 spans[2]++;
737 is_set = -1;
738 }
739
740 void
741 bytes_in::bflush ()
742 {
743 if (bit_pos)
744 bit_flush ();
745 }
746
747 /* When reading, we don't know how many bools we'll read in. So read
748 4 bytes-worth, and then rewind when flushing if we didn't need them
749 all. You can't have a block of bools closer than 4 bytes to the
750 end of the buffer. */
751
752 void
753 bytes_in::bfill ()
754 {
755 bit_val = u32 ();
756 }
757
758 /* Bools are packed into bytes. You cannot mix bools and non-bools.
759 You must call bflush before emitting another type. So batch your
760 bools.
761
762 It may be worth optimizing for most bools being zero. Some kind of
763 run-length encoding? */
764
765 void
766 bytes_out::b (bool x)
767 {
768 if (is_set != x)
769 {
770 is_set = x;
771 spans[x]++;
772 }
773 lengths[x]++;
774 bit_val |= unsigned (x) << bit_pos++;
775 if (bit_pos == 32)
776 {
777 u32 (bit_val);
778 lengths[2] += bit_flush ();
779 }
780 }
781
782 bool
783 bytes_in::b ()
784 {
785 if (!bit_pos)
786 bfill ();
787 bool v = (bit_val >> bit_pos++) & 1;
788 if (bit_pos == 32)
789 bit_flush ();
790 return v;
791 }
792
793 /* Exactly 4 bytes. Used internally for bool packing and a few other
794 places. We can't simply use uint32_t because (a) alignment and
795 (b) we need little-endian for the bool streaming rewinding to make
796 sense. */
797
798 void
799 bytes_out::u32 (unsigned val)
800 {
801 if (char *ptr = write (4))
802 {
803 ptr[0] = val;
804 ptr[1] = val >> 8;
805 ptr[2] = val >> 16;
806 ptr[3] = val >> 24;
807 }
808 }
809
810 unsigned
811 bytes_in::u32 ()
812 {
813 unsigned val = 0;
814 if (const char *ptr = read (4))
815 {
816 val |= (unsigned char)ptr[0];
817 val |= (unsigned char)ptr[1] << 8;
818 val |= (unsigned char)ptr[2] << 16;
819 val |= (unsigned char)ptr[3] << 24;
820 }
821
822 return val;
823 }
824
825 /* Chars are unsigned and written as single bytes. */
826
827 void
828 bytes_out::c (unsigned char v)
829 {
830 if (char *ptr = write (1))
831 *ptr = v;
832 }
833
834 int
835 bytes_in::c ()
836 {
837 int v = 0;
838 if (const char *ptr = read (1))
839 v = (unsigned char)ptr[0];
840 return v;
841 }
842
843 /* Ints 7-bit as a byte. Otherwise a 3bit count of following bytes in
844 big-endian form. 4 bits are in the first byte. */
845
846 void
847 bytes_out::i (int v)
848 {
849 if (char *ptr = write (1))
850 {
851 if (v <= 0x3f && v >= -0x40)
852 *ptr = v & 0x7f;
853 else
854 {
855 unsigned bytes = 0;
856 int probe;
857 if (v >= 0)
858 for (probe = v >> 8; probe > 0x7; probe >>= 8)
859 bytes++;
860 else
861 for (probe = v >> 8; probe < -0x8; probe >>= 8)
862 bytes++;
863 *ptr = 0x80 | bytes << 4 | (probe & 0xf);
864 if ((ptr = write (++bytes)))
865 for (; bytes--; v >>= 8)
866 ptr[bytes] = v & 0xff;
867 }
868 }
869 }
870
871 int
872 bytes_in::i ()
873 {
874 int v = 0;
875 if (const char *ptr = read (1))
876 {
877 v = *ptr & 0xff;
878 if (v & 0x80)
879 {
880 unsigned bytes = (v >> 4) & 0x7;
881 v &= 0xf;
882 if (v & 0x8)
883 v |= -1 ^ 0x7;
884 if ((ptr = read (++bytes)))
885 while (bytes--)
886 v = (v << 8) | (*ptr++ & 0xff);
887 }
888 else if (v & 0x40)
889 v |= -1 ^ 0x3f;
890 }
891
892 return v;
893 }
894
895 void
896 bytes_out::u (unsigned v)
897 {
898 if (char *ptr = write (1))
899 {
900 if (v <= 0x7f)
901 *ptr = v;
902 else
903 {
904 unsigned bytes = 0;
905 unsigned probe;
906 for (probe = v >> 8; probe > 0xf; probe >>= 8)
907 bytes++;
908 *ptr = 0x80 | bytes << 4 | probe;
909 if ((ptr = write (++bytes)))
910 for (; bytes--; v >>= 8)
911 ptr[bytes] = v & 0xff;
912 }
913 }
914 }
915
916 unsigned
917 bytes_in::u ()
918 {
919 unsigned v = 0;
920
921 if (const char *ptr = read (1))
922 {
923 v = *ptr & 0xff;
924 if (v & 0x80)
925 {
926 unsigned bytes = (v >> 4) & 0x7;
927 v &= 0xf;
928 if ((ptr = read (++bytes)))
929 while (bytes--)
930 v = (v << 8) | (*ptr++ & 0xff);
931 }
932 }
933
934 return v;
935 }
936
937 void
938 bytes_out::wi (HOST_WIDE_INT v)
939 {
940 if (char *ptr = write (1))
941 {
942 if (v <= 0x3f && v >= -0x40)
943 *ptr = v & 0x7f;
944 else
945 {
946 unsigned bytes = 0;
947 HOST_WIDE_INT probe;
948 if (v >= 0)
949 for (probe = v >> 8; probe > 0x7; probe >>= 8)
950 bytes++;
951 else
952 for (probe = v >> 8; probe < -0x8; probe >>= 8)
953 bytes++;
954 *ptr = 0x80 | bytes << 4 | (probe & 0xf);
955 if ((ptr = write (++bytes)))
956 for (; bytes--; v >>= 8)
957 ptr[bytes] = v & 0xff;
958 }
959 }
960 }
961
962 HOST_WIDE_INT
963 bytes_in::wi ()
964 {
965 HOST_WIDE_INT v = 0;
966 if (const char *ptr = read (1))
967 {
968 v = *ptr & 0xff;
969 if (v & 0x80)
970 {
971 unsigned bytes = (v >> 4) & 0x7;
972 v &= 0xf;
973 if (v & 0x8)
974 v |= -1 ^ 0x7;
975 if ((ptr = read (++bytes)))
976 while (bytes--)
977 v = (v << 8) | (*ptr++ & 0xff);
978 }
979 else if (v & 0x40)
980 v |= -1 ^ 0x3f;
981 }
982
983 return v;
984 }
985
986 /* unsigned wide ints are just written as signed wide ints. */
987
988 inline void
989 bytes_out::wu (unsigned HOST_WIDE_INT v)
990 {
991 wi ((HOST_WIDE_INT) v);
992 }
993
994 inline unsigned HOST_WIDE_INT
995 bytes_in::wu ()
996 {
997 return (unsigned HOST_WIDE_INT) wi ();
998 }
999
1000 /* size_t written as unsigned or unsigned wide int. */
1001
1002 inline void
1003 bytes_out::z (size_t s)
1004 {
1005 if (sizeof (s) == sizeof (unsigned))
1006 u (s);
1007 else
1008 wu (s);
1009 }
1010
1011 inline size_t
1012 bytes_in::z ()
1013 {
1014 if (sizeof (size_t) == sizeof (unsigned))
1015 return u ();
1016 else
1017 return wu ();
1018 }
1019
1020 /* Buffer simply memcpied. */
1021 void *
1022 bytes_out::buf (size_t len)
1023 {
1024 align (sizeof (void *) * 2);
1025 return write (len);
1026 }
1027
1028 void
1029 bytes_out::buf (const void *src, size_t len)
1030 {
1031 if (void *ptr = buf (len))
1032 memcpy (ptr, src, len);
1033 }
1034
1035 const void *
1036 bytes_in::buf (size_t len)
1037 {
1038 align (sizeof (void *) * 2);
1039 const char *ptr = read (len);
1040
1041 return ptr;
1042 }
1043
1044 /* strings as an size_t length, followed by the buffer. Make sure
1045 there's a NUL terminator on read. */
1046
1047 void
1048 bytes_out::str (const char *string, size_t len)
1049 {
1050 z (len);
1051 if (len)
1052 {
1053 gcc_checking_assert (!string[len]);
1054 buf (string, len + 1);
1055 }
1056 }
1057
1058 const char *
1059 bytes_in::str (size_t *len_p)
1060 {
1061 size_t len = z ();
1062
1063 /* We're about to trust some user data. */
1064 if (overrun)
1065 len = 0;
1066 if (len_p)
1067 *len_p = len;
1068 const char *str = NULL;
1069 if (len)
1070 {
1071 str = reinterpret_cast<const char *> (buf (len + 1));
1072 if (!str || str[len])
1073 {
1074 set_overrun ();
1075 str = NULL;
1076 }
1077 }
1078 return str ? str : "";
1079 }
1080
1081 cpp_hashnode *
1082 bytes_in::cpp_node ()
1083 {
1084 size_t len;
1085 const char *s = str (&len);
1086 if (!len)
1087 return NULL;
1088 return ::cpp_node (get_identifier_with_length (s, len));
1089 }
1090
1091 /* Format a string directly to the buffer, including a terminating
1092 NUL. Intended for human consumption. */
1093
1094 void
1095 bytes_out::printf (const char *format, ...)
1096 {
1097 va_list args;
1098 /* Exercise buffer expansion. */
1099 size_t len = EXPERIMENT (10, 500);
1100
1101 while (char *ptr = write (len))
1102 {
1103 va_start (args, format);
1104 size_t actual = vsnprintf (ptr, len, format, args) + 1;
1105 va_end (args);
1106 if (actual <= len)
1107 {
1108 unuse (len - actual);
1109 break;
1110 }
1111 unuse (len);
1112 len = actual;
1113 }
1114 }
1115
1116 void
1117 bytes_out::print_time (const char *kind, const tm *time, const char *tz)
1118 {
1119 printf ("%stime: %4u/%02u/%02u %02u:%02u:%02u %s",
1120 kind, time->tm_year + 1900, time->tm_mon + 1, time->tm_mday,
1121 time->tm_hour, time->tm_min, time->tm_sec, tz);
1122 }
1123
1124 /* Encapsulated Lazy Records Of Named Declarations.
1125 Header: Stunningly Elf32_Ehdr-like
1126 Sections: Sectional data
1127 [1-N) : User data sections
1128 N .strtab : strings, stunningly ELF STRTAB-like
1129 Index: Section table, stunningly ELF32_Shdr-like. */
1130
1131 class elf {
1132 protected:
1133 /* Constants used within the format. */
1134 enum private_constants {
1135 /* File kind. */
1136 ET_NONE = 0,
1137 EM_NONE = 0,
1138 OSABI_NONE = 0,
1139
1140 /* File format. */
1141 EV_CURRENT = 1,
1142 CLASS32 = 1,
1143 DATA2LSB = 1,
1144 DATA2MSB = 2,
1145
1146 /* Section numbering. */
1147 SHN_UNDEF = 0,
1148 SHN_LORESERVE = 0xff00,
1149 SHN_XINDEX = 0xffff,
1150
1151 /* Section types. */
1152 SHT_NONE = 0, /* No contents. */
1153 SHT_PROGBITS = 1, /* Random bytes. */
1154 SHT_STRTAB = 3, /* A string table. */
1155
1156 /* Section flags. */
1157 SHF_NONE = 0x00, /* Nothing. */
1158 SHF_STRINGS = 0x20, /* NUL-Terminated strings. */
1159
1160 /* I really hope we do not get CMI files larger than 4GB. */
1161 MY_CLASS = CLASS32,
1162 /* It is host endianness that is relevant. */
1163 MY_ENDIAN = DATA2LSB
1164 #ifdef WORDS_BIGENDIAN
1165 ^ DATA2LSB ^ DATA2MSB
1166 #endif
1167 };
1168
1169 public:
1170 /* Constants visible to users. */
1171 enum public_constants {
1172 /* Special error codes. Breaking layering a bit. */
1173 E_BAD_DATA = -1, /* Random unexpected data errors. */
1174 E_BAD_LAZY = -2, /* Badly ordered laziness. */
1175 E_BAD_IMPORT = -3 /* A nested import failed. */
1176 };
1177
1178 protected:
1179 /* File identification. On-disk representation. */
1180 struct ident {
1181 uint8_t magic[4]; /* 0x7f, 'E', 'L', 'F' */
1182 uint8_t klass; /* 4:CLASS32 */
1183 uint8_t data; /* 5:DATA2[LM]SB */
1184 uint8_t version; /* 6:EV_CURRENT */
1185 uint8_t osabi; /* 7:OSABI_NONE */
1186 uint8_t abiver; /* 8: 0 */
1187 uint8_t pad[7]; /* 9-15 */
1188 };
1189 /* File header. On-disk representation. */
1190 struct header {
1191 struct ident ident;
1192 uint16_t type; /* ET_NONE */
1193 uint16_t machine; /* EM_NONE */
1194 uint32_t version; /* EV_CURRENT */
1195 uint32_t entry; /* 0 */
1196 uint32_t phoff; /* 0 */
1197 uint32_t shoff; /* Section Header Offset in file */
1198 uint32_t flags;
1199 uint16_t ehsize; /* ELROND Header SIZE -- sizeof (header) */
1200 uint16_t phentsize; /* 0 */
1201 uint16_t phnum; /* 0 */
1202 uint16_t shentsize; /* Section Header SIZE -- sizeof (section) */
1203 uint16_t shnum; /* Section Header NUM */
1204 uint16_t shstrndx; /* Section Header STRing iNDeX */
1205 };
1206 /* File section. On-disk representation. */
1207 struct section {
1208 uint32_t name; /* String table offset. */
1209 uint32_t type; /* SHT_* */
1210 uint32_t flags; /* SHF_* */
1211 uint32_t addr; /* 0 */
1212 uint32_t offset; /* OFFSET in file */
1213 uint32_t size; /* SIZE of section */
1214 uint32_t link; /* 0 */
1215 uint32_t info; /* 0 */
1216 uint32_t addralign; /* 0 */
1217 uint32_t entsize; /* ENTry SIZE, usually 0 */
1218 };
1219
1220 protected:
1221 data hdr; /* The header. */
1222 data sectab; /* The section table. */
1223 data strtab; /* String table. */
1224 int fd; /* File descriptor we're reading or writing. */
1225 int err; /* Sticky error code. */
1226
1227 public:
1228 /* Construct from STREAM. E is errno if STREAM NULL. */
1229 elf (int fd, int e)
1230 :hdr (), sectab (), strtab (), fd (fd), err (fd >= 0 ? 0 : e)
1231 {}
1232 ~elf ()
1233 {
1234 gcc_checking_assert (fd < 0 && !hdr.buffer
1235 && !sectab.buffer && !strtab.buffer);
1236 }
1237
1238 public:
1239 /* Return the error, if we have an error. */
1240 int get_error () const
1241 {
1242 return err;
1243 }
1244 /* Set the error, unless it's already been set. */
1245 void set_error (int e = E_BAD_DATA)
1246 {
1247 if (!err)
1248 err = e;
1249 }
1250 /* Get an error string. */
1251 const char *get_error (const char *) const;
1252
1253 public:
1254 /* Begin reading/writing file. Return false on error. */
1255 bool begin () const
1256 {
1257 return !get_error ();
1258 }
1259 /* Finish reading/writing file. Return false on error. */
1260 bool end ();
1261 };
1262
1263 /* Return error string. */
1264
1265 const char *
1266 elf::get_error (const char *name) const
1267 {
1268 if (!name)
1269 return "Unknown CMI mapping";
1270
1271 switch (err)
1272 {
1273 case 0:
1274 gcc_unreachable ();
1275 case E_BAD_DATA:
1276 return "Bad file data";
1277 case E_BAD_IMPORT:
1278 return "Bad import dependency";
1279 case E_BAD_LAZY:
1280 return "Bad lazy ordering";
1281 default:
1282 return xstrerror (err);
1283 }
1284 }
1285
1286 /* Finish file, return true if there's an error. */
1287
1288 bool
1289 elf::end ()
1290 {
1291 /* Close the stream and free the section table. */
1292 if (fd >= 0 && close (fd))
1293 set_error (errno);
1294 fd = -1;
1295
1296 return !get_error ();
1297 }
1298
1299 /* ELROND reader. */
1300
1301 class elf_in : public elf {
1302 typedef elf parent;
1303
1304 private:
1305 /* For freezing & defrosting. */
1306 #if !defined (HOST_LACKS_INODE_NUMBERS)
1307 dev_t device;
1308 ino_t inode;
1309 #endif
1310
1311 public:
1312 elf_in (int fd, int e)
1313 :parent (fd, e)
1314 {
1315 }
1316 ~elf_in ()
1317 {
1318 }
1319
1320 public:
1321 bool is_frozen () const
1322 {
1323 return fd < 0 && hdr.pos;
1324 }
1325 bool is_freezable () const
1326 {
1327 return fd >= 0 && hdr.pos;
1328 }
1329 void freeze ();
1330 bool defrost (const char *);
1331
1332 /* If BYTES is in the mmapped area, allocate a new buffer for it. */
1333 void preserve (bytes_in &bytes ATTRIBUTE_UNUSED)
1334 {
1335 #if MAPPED_READING
1336 if (hdr.buffer && bytes.buffer >= hdr.buffer
1337 && bytes.buffer < hdr.buffer + hdr.pos)
1338 {
1339 char *buf = bytes.buffer;
1340 bytes.buffer = data::simple_memory.grow (NULL, bytes.size);
1341 memcpy (bytes.buffer, buf, bytes.size);
1342 }
1343 #endif
1344 }
1345 /* If BYTES is not in SELF's mmapped area, free it. SELF might be
1346 NULL. */
1347 static void release (elf_in *self ATTRIBUTE_UNUSED, bytes_in &bytes)
1348 {
1349 #if MAPPED_READING
1350 if (!(self && self->hdr.buffer && bytes.buffer >= self->hdr.buffer
1351 && bytes.buffer < self->hdr.buffer + self->hdr.pos))
1352 #endif
1353 data::simple_memory.shrink (bytes.buffer);
1354 bytes.buffer = NULL;
1355 bytes.size = 0;
1356 }
1357
1358 public:
1359 static void grow (data &data, unsigned needed)
1360 {
1361 gcc_checking_assert (!data.buffer);
1362 #if !MAPPED_READING
1363 data.buffer = XNEWVEC (char, needed);
1364 #endif
1365 data.size = needed;
1366 }
1367 static void shrink (data &data)
1368 {
1369 #if !MAPPED_READING
1370 XDELETEVEC (data.buffer);
1371 #endif
1372 data.buffer = NULL;
1373 data.size = 0;
1374 }
1375
1376 public:
1377 const section *get_section (unsigned s) const
1378 {
1379 if (s * sizeof (section) < sectab.size)
1380 return reinterpret_cast<const section *>
1381 (&sectab.buffer[s * sizeof (section)]);
1382 else
1383 return NULL;
1384 }
1385 unsigned get_section_limit () const
1386 {
1387 return sectab.size / sizeof (section);
1388 }
1389
1390 protected:
1391 const char *read (data *, unsigned, unsigned);
1392
1393 public:
1394 /* Read section by number. */
1395 bool read (data *d, const section *s)
1396 {
1397 return s && read (d, s->offset, s->size);
1398 }
1399
1400 /* Find section by name. */
1401 unsigned find (const char *name);
1402 /* Find section by index. */
1403 const section *find (unsigned snum, unsigned type = SHT_PROGBITS);
1404
1405 public:
1406 /* Release the string table, when we're done with it. */
1407 void release ()
1408 {
1409 shrink (strtab);
1410 }
1411
1412 public:
1413 bool begin (location_t);
1414 bool end ()
1415 {
1416 release ();
1417 #if MAPPED_READING
1418 if (hdr.buffer)
1419 munmap (hdr.buffer, hdr.pos);
1420 hdr.buffer = NULL;
1421 #endif
1422 shrink (sectab);
1423
1424 return parent::end ();
1425 }
1426
1427 public:
1428 /* Return string name at OFFSET. Checks OFFSET range. Always
1429 returns non-NULL. We know offset 0 is an empty string. */
1430 const char *name (unsigned offset)
1431 {
1432 return &strtab.buffer[offset < strtab.size ? offset : 0];
1433 }
1434 };
1435
1436 /* ELROND writer. */
1437
1438 class elf_out : public elf, public data::allocator {
1439 typedef elf parent;
1440 /* Desired section alignment on disk. */
1441 static const int SECTION_ALIGN = 16;
1442
1443 private:
1444 ptr_int_hash_map identtab; /* Map of IDENTIFIERS to strtab offsets. */
1445 unsigned pos; /* Write position in file. */
1446 #if MAPPED_WRITING
1447 unsigned offset; /* Offset of the mapping. */
1448 unsigned extent; /* Length of mapping. */
1449 unsigned page_size; /* System page size. */
1450 #endif
1451
1452 public:
1453 elf_out (int fd, int e)
1454 :parent (fd, e), identtab (500), pos (0)
1455 {
1456 #if MAPPED_WRITING
1457 offset = extent = 0;
1458 page_size = sysconf (_SC_PAGE_SIZE);
1459 if (page_size < SECTION_ALIGN)
1460 /* Something really strange. */
1461 set_error (EINVAL);
1462 #endif
1463 }
1464 ~elf_out ()
1465 {
1466 data::simple_memory.shrink (hdr);
1467 data::simple_memory.shrink (sectab);
1468 data::simple_memory.shrink (strtab);
1469 }
1470
1471 #if MAPPED_WRITING
1472 private:
1473 void create_mapping (unsigned ext, bool extending = true);
1474 void remove_mapping ();
1475 #endif
1476
1477 protected:
1478 using allocator::grow;
1479 virtual char *grow (char *, unsigned needed);
1480 #if MAPPED_WRITING
1481 using allocator::shrink;
1482 virtual void shrink (char *);
1483 #endif
1484
1485 public:
1486 unsigned get_section_limit () const
1487 {
1488 return sectab.pos / sizeof (section);
1489 }
1490
1491 protected:
1492 unsigned add (unsigned type, unsigned name = 0,
1493 unsigned off = 0, unsigned size = 0, unsigned flags = SHF_NONE);
1494 unsigned write (const data &);
1495 #if MAPPED_WRITING
1496 unsigned write (const bytes_out &);
1497 #endif
1498
1499 public:
1500 /* IDENTIFIER to strtab offset. */
1501 unsigned name (tree ident);
1502 /* String literal to strtab offset. */
1503 unsigned name (const char *n);
1504 /* Qualified name of DECL to strtab offset. */
1505 unsigned qualified_name (tree decl, bool is_defn);
1506
1507 private:
1508 unsigned strtab_write (const char *s, unsigned l);
1509 void strtab_write (tree decl, int);
1510
1511 public:
1512 /* Add a section with contents or strings. */
1513 unsigned add (const bytes_out &, bool string_p, unsigned name);
1514
1515 public:
1516 /* Begin and end writing. */
1517 bool begin ();
1518 bool end ();
1519 };
1520
1521 /* Begin reading section NAME (of type PROGBITS) from SOURCE.
1522 Data always checked for CRC. */
1523
1524 bool
1525 bytes_in::begin (location_t loc, elf_in *source, const char *name)
1526 {
1527 unsigned snum = source->find (name);
1528
1529 return begin (loc, source, snum, name);
1530 }
1531
1532 /* Begin reading section numbered SNUM with NAME (may be NULL). */
1533
1534 bool
1535 bytes_in::begin (location_t loc, elf_in *source, unsigned snum, const char *name)
1536 {
1537 if (!source->read (this, source->find (snum))
1538 || !size || !check_crc ())
1539 {
1540 source->set_error (elf::E_BAD_DATA);
1541 source->shrink (*this);
1542 if (name)
1543 error_at (loc, "section %qs is missing or corrupted", name);
1544 else
1545 error_at (loc, "section #%u is missing or corrupted", snum);
1546 return false;
1547 }
1548 pos = 4;
1549 return true;
1550 }
1551
1552 /* Finish reading a section. */
1553
1554 bool
1555 bytes_in::end (elf_in *src)
1556 {
1557 if (more_p ())
1558 set_overrun ();
1559 if (overrun)
1560 src->set_error ();
1561
1562 src->shrink (*this);
1563
1564 return !overrun;
1565 }
1566
1567 /* Begin writing buffer. */
1568
1569 void
1570 bytes_out::begin (bool need_crc)
1571 {
1572 if (need_crc)
1573 pos = 4;
1574 memory->grow (*this, 0, false);
1575 }
1576
1577 /* Finish writing buffer. Stream out to SINK as named section NAME.
1578 Return section number or 0 on failure. If CRC_PTR is true, crc
1579 the data. Otherwise it is a string section. */
1580
1581 unsigned
1582 bytes_out::end (elf_out *sink, unsigned name, unsigned *crc_ptr)
1583 {
1584 lengths[3] += pos;
1585 spans[3]++;
1586
1587 set_crc (crc_ptr);
1588 unsigned sec_num = sink->add (*this, !crc_ptr, name);
1589 memory->shrink (*this);
1590
1591 return sec_num;
1592 }
1593
1594 /* Close and open the file, without destroying it. */
1595
1596 void
1597 elf_in::freeze ()
1598 {
1599 gcc_checking_assert (!is_frozen ());
1600 #if MAPPED_READING
1601 if (munmap (hdr.buffer, hdr.pos) < 0)
1602 set_error (errno);
1603 #endif
1604 if (close (fd) < 0)
1605 set_error (errno);
1606 fd = -1;
1607 }
1608
1609 bool
1610 elf_in::defrost (const char *name)
1611 {
1612 gcc_checking_assert (is_frozen ());
1613 struct stat stat;
1614
1615 fd = open (name, O_RDONLY | O_CLOEXEC | O_BINARY);
1616 if (fd < 0 || fstat (fd, &stat) < 0)
1617 set_error (errno);
1618 else
1619 {
1620 bool ok = hdr.pos == unsigned (stat.st_size);
1621 #ifndef HOST_LACKS_INODE_NUMBERS
1622 if (device != stat.st_dev
1623 || inode != stat.st_ino)
1624 ok = false;
1625 #endif
1626 if (!ok)
1627 set_error (EMFILE);
1628 #if MAPPED_READING
1629 if (ok)
1630 {
1631 char *mapping = reinterpret_cast<char *>
1632 (mmap (NULL, hdr.pos, PROT_READ, MAP_SHARED, fd, 0));
1633 if (mapping == MAP_FAILED)
1634 fail:
1635 set_error (errno);
1636 else
1637 {
1638 if (madvise (mapping, hdr.pos, MADV_RANDOM))
1639 goto fail;
1640
1641 /* These buffers are never NULL in this case. */
1642 strtab.buffer = mapping + strtab.pos;
1643 sectab.buffer = mapping + sectab.pos;
1644 hdr.buffer = mapping;
1645 }
1646 }
1647 #endif
1648 }
1649
1650 return !get_error ();
1651 }
1652
1653 /* Read at current position into BUFFER. Return true on success. */
1654
1655 const char *
1656 elf_in::read (data *data, unsigned pos, unsigned length)
1657 {
1658 #if MAPPED_READING
1659 if (pos + length > hdr.pos)
1660 {
1661 set_error (EINVAL);
1662 return NULL;
1663 }
1664 #else
1665 if (pos != ~0u && lseek (fd, pos, SEEK_SET) < 0)
1666 {
1667 set_error (errno);
1668 return NULL;
1669 }
1670 #endif
1671 grow (*data, length);
1672 #if MAPPED_READING
1673 data->buffer = hdr.buffer + pos;
1674 #else
1675 if (::read (fd, data->buffer, data->size) != ssize_t (length))
1676 {
1677 set_error (errno);
1678 shrink (*data);
1679 return NULL;
1680 }
1681 #endif
1682
1683 return data->buffer;
1684 }
1685
1686 /* Read section SNUM of TYPE. Return section pointer or NULL on error. */
1687
1688 const elf::section *
1689 elf_in::find (unsigned snum, unsigned type)
1690 {
1691 const section *sec = get_section (snum);
1692 if (!snum || !sec || sec->type != type)
1693 return NULL;
1694 return sec;
1695 }
1696
1697 /* Find a section NAME and TYPE. Return section number, or zero on
1698 failure. */
1699
1700 unsigned
1701 elf_in::find (const char *sname)
1702 {
1703 for (unsigned pos = sectab.size; pos -= sizeof (section); )
1704 {
1705 const section *sec
1706 = reinterpret_cast<const section *> (&sectab.buffer[pos]);
1707
1708 if (0 == strcmp (sname, name (sec->name)))
1709 return pos / sizeof (section);
1710 }
1711
1712 return 0;
1713 }
1714
1715 /* Begin reading file. Verify header. Pull in section and string
1716 tables. Return true on success. */
1717
1718 bool
1719 elf_in::begin (location_t loc)
1720 {
1721 if (!parent::begin ())
1722 return false;
1723
1724 struct stat stat;
1725 unsigned size = 0;
1726 if (!fstat (fd, &stat))
1727 {
1728 #if !defined (HOST_LACKS_INODE_NUMBERS)
1729 device = stat.st_dev;
1730 inode = stat.st_ino;
1731 #endif
1732 /* Never generate files > 4GB, check we've not been given one. */
1733 if (stat.st_size == unsigned (stat.st_size))
1734 size = unsigned (stat.st_size);
1735 }
1736
1737 #if MAPPED_READING
1738 /* MAP_SHARED so that the file is backing store. If someone else
1739 concurrently writes it, they're wrong. */
1740 void *mapping = mmap (NULL, size, PROT_READ, MAP_SHARED, fd, 0);
1741 if (mapping == MAP_FAILED)
1742 {
1743 fail:
1744 set_error (errno);
1745 return false;
1746 }
1747 /* We'll be hopping over this randomly. Some systems declare the
1748 first parm as char *, and other declare it as void *. */
1749 if (madvise (reinterpret_cast <char *> (mapping), size, MADV_RANDOM))
1750 goto fail;
1751
1752 hdr.buffer = (char *)mapping;
1753 #else
1754 read (&hdr, 0, sizeof (header));
1755 #endif
1756 hdr.pos = size; /* Record size of the file. */
1757
1758 const header *h = reinterpret_cast<const header *> (hdr.buffer);
1759 if (!h)
1760 return false;
1761
1762 if (h->ident.magic[0] != 0x7f
1763 || h->ident.magic[1] != 'E'
1764 || h->ident.magic[2] != 'L'
1765 || h->ident.magic[3] != 'F')
1766 {
1767 error_at (loc, "not Encapsulated Lazy Records of Named Declarations");
1768 failed:
1769 shrink (hdr);
1770 return false;
1771 }
1772
1773 /* We expect a particular format -- the ELF is not intended to be
1774 distributable. */
1775 if (h->ident.klass != MY_CLASS
1776 || h->ident.data != MY_ENDIAN
1777 || h->ident.version != EV_CURRENT
1778 || h->type != ET_NONE
1779 || h->machine != EM_NONE
1780 || h->ident.osabi != OSABI_NONE)
1781 {
1782 error_at (loc, "unexpected encapsulation format or type");
1783 goto failed;
1784 }
1785
1786 int e = -1;
1787 if (!h->shoff || h->shentsize != sizeof (section))
1788 {
1789 malformed:
1790 set_error (e);
1791 error_at (loc, "encapsulation is malformed");
1792 goto failed;
1793 }
1794
1795 unsigned strndx = h->shstrndx;
1796 unsigned shnum = h->shnum;
1797 if (shnum == SHN_XINDEX)
1798 {
1799 if (!read (&sectab, h->shoff, sizeof (section)))
1800 {
1801 section_table_fail:
1802 e = errno;
1803 goto malformed;
1804 }
1805 shnum = get_section (0)->size;
1806 /* Freeing does mean we'll re-read it in the case we're not
1807 mapping, but this is going to be rare. */
1808 shrink (sectab);
1809 }
1810
1811 if (!shnum)
1812 goto malformed;
1813
1814 if (!read (&sectab, h->shoff, shnum * sizeof (section)))
1815 goto section_table_fail;
1816
1817 if (strndx == SHN_XINDEX)
1818 strndx = get_section (0)->link;
1819
1820 if (!read (&strtab, find (strndx, SHT_STRTAB)))
1821 goto malformed;
1822
1823 /* The string table should be at least one byte, with NUL chars
1824 at either end. */
1825 if (!(strtab.size && !strtab.buffer[0]
1826 && !strtab.buffer[strtab.size - 1]))
1827 goto malformed;
1828
1829 #if MAPPED_READING
1830 /* Record the offsets of the section and string tables. */
1831 sectab.pos = h->shoff;
1832 strtab.pos = shnum * sizeof (section);
1833 #else
1834 shrink (hdr);
1835 #endif
1836
1837 return true;
1838 }
1839
1840 /* Create a new mapping. */
1841
1842 #if MAPPED_WRITING
1843 void
1844 elf_out::create_mapping (unsigned ext, bool extending)
1845 {
1846 #ifndef HAVE_POSIX_FALLOCATE
1847 #define posix_fallocate(fd,off,len) ftruncate (fd, off + len)
1848 #endif
1849 void *mapping = MAP_FAILED;
1850 if (extending && ext < 1024 * 1024)
1851 {
1852 if (!posix_fallocate (fd, offset, ext * 2))
1853 mapping = mmap (NULL, ext * 2, PROT_READ | PROT_WRITE,
1854 MAP_SHARED, fd, offset);
1855 if (mapping != MAP_FAILED)
1856 ext *= 2;
1857 }
1858 if (mapping == MAP_FAILED)
1859 {
1860 if (!extending || !posix_fallocate (fd, offset, ext))
1861 mapping = mmap (NULL, ext, PROT_READ | PROT_WRITE,
1862 MAP_SHARED, fd, offset);
1863 if (mapping == MAP_FAILED)
1864 {
1865 set_error (errno);
1866 mapping = NULL;
1867 ext = 0;
1868 }
1869 }
1870 #undef posix_fallocate
1871 hdr.buffer = (char *)mapping;
1872 extent = ext;
1873 }
1874 #endif
1875
1876 /* Flush out the current mapping. */
1877
1878 #if MAPPED_WRITING
1879 void
1880 elf_out::remove_mapping ()
1881 {
1882 if (hdr.buffer)
1883 {
1884 /* MS_ASYNC dtrt with the removed mapping, including a
1885 subsequent overlapping remap. */
1886 if (msync (hdr.buffer, extent, MS_ASYNC)
1887 || munmap (hdr.buffer, extent))
1888 /* We're somewhat screwed at this point. */
1889 set_error (errno);
1890 }
1891
1892 hdr.buffer = NULL;
1893 }
1894 #endif
1895
1896 /* Grow a mapping of PTR to be NEEDED bytes long. This gets
1897 interesting if the new size grows the EXTENT. */
1898
1899 char *
1900 elf_out::grow (char *data, unsigned needed)
1901 {
1902 if (!data)
1903 {
1904 /* First allocation, check we're aligned. */
1905 gcc_checking_assert (!(pos & (SECTION_ALIGN - 1)));
1906 #if MAPPED_WRITING
1907 data = hdr.buffer + (pos - offset);
1908 #endif
1909 }
1910
1911 #if MAPPED_WRITING
1912 unsigned off = data - hdr.buffer;
1913 if (off + needed > extent)
1914 {
1915 /* We need to grow the mapping. */
1916 unsigned lwm = off & ~(page_size - 1);
1917 unsigned hwm = (off + needed + page_size - 1) & ~(page_size - 1);
1918
1919 gcc_checking_assert (hwm > extent);
1920
1921 remove_mapping ();
1922
1923 offset += lwm;
1924 create_mapping (extent < hwm - lwm ? hwm - lwm : extent);
1925
1926 data = hdr.buffer + (off - lwm);
1927 }
1928 #else
1929 data = allocator::grow (data, needed);
1930 #endif
1931
1932 return data;
1933 }
1934
1935 #if MAPPED_WRITING
1936 /* Shrinking is a NOP. */
1937 void
1938 elf_out::shrink (char *)
1939 {
1940 }
1941 #endif
1942
1943 /* Write S of length L to the strtab buffer. L must include the ending
1944 NUL, if that's what you want. */
1945
1946 unsigned
1947 elf_out::strtab_write (const char *s, unsigned l)
1948 {
1949 if (strtab.pos + l > strtab.size)
1950 data::simple_memory.grow (strtab, strtab.pos + l, false);
1951 memcpy (strtab.buffer + strtab.pos, s, l);
1952 unsigned res = strtab.pos;
1953 strtab.pos += l;
1954 return res;
1955 }
1956
1957 /* Write qualified name of decl. INNER >0 if this is a definition, <0
1958 if this is a qualifier of an outer name. */
1959
1960 void
1961 elf_out::strtab_write (tree decl, int inner)
1962 {
1963 tree ctx = CP_DECL_CONTEXT (decl);
1964 if (TYPE_P (ctx))
1965 ctx = TYPE_NAME (ctx);
1966 if (ctx != global_namespace)
1967 strtab_write (ctx, -1);
1968
1969 tree name = DECL_NAME (decl);
1970 if (!name)
1971 name = DECL_ASSEMBLER_NAME_RAW (decl);
1972 strtab_write (IDENTIFIER_POINTER (name), IDENTIFIER_LENGTH (name));
1973
1974 if (inner)
1975 strtab_write (&"::{}"[inner+1], 2);
1976 }
1977
1978 /* Map IDENTIFIER IDENT to strtab offset. Inserts into strtab if not
1979 already there. */
1980
1981 unsigned
1982 elf_out::name (tree ident)
1983 {
1984 unsigned res = 0;
1985 if (ident)
1986 {
1987 bool existed;
1988 int *slot = &identtab.get_or_insert (ident, &existed);
1989 if (!existed)
1990 *slot = strtab_write (IDENTIFIER_POINTER (ident),
1991 IDENTIFIER_LENGTH (ident) + 1);
1992 res = *slot;
1993 }
1994 return res;
1995 }
1996
1997 /* Map LITERAL to strtab offset. Does not detect duplicates and
1998 expects LITERAL to remain live until strtab is written out. */
1999
2000 unsigned
2001 elf_out::name (const char *literal)
2002 {
2003 return strtab_write (literal, strlen (literal) + 1);
2004 }
2005
2006 /* Map a DECL's qualified name to strtab offset. Does not detect
2007 duplicates. */
2008
2009 unsigned
2010 elf_out::qualified_name (tree decl, bool is_defn)
2011 {
2012 gcc_checking_assert (DECL_P (decl) && decl != global_namespace);
2013 unsigned result = strtab.pos;
2014
2015 strtab_write (decl, is_defn);
2016 strtab_write ("", 1);
2017
2018 return result;
2019 }
2020
2021 /* Add section to file. Return section number. TYPE & NAME identify
2022 the section. OFF and SIZE identify the file location of its
2023 data. FLAGS contains additional info. */
2024
2025 unsigned
2026 elf_out::add (unsigned type, unsigned name, unsigned off, unsigned size,
2027 unsigned flags)
2028 {
2029 gcc_checking_assert (!(off & (SECTION_ALIGN - 1)));
2030 if (sectab.pos + sizeof (section) > sectab.size)
2031 data::simple_memory.grow (sectab, sectab.pos + sizeof (section), false);
2032 section *sec = reinterpret_cast<section *> (sectab.buffer + sectab.pos);
2033 memset (sec, 0, sizeof (section));
2034 sec->type = type;
2035 sec->flags = flags;
2036 sec->name = name;
2037 sec->offset = off;
2038 sec->size = size;
2039 if (flags & SHF_STRINGS)
2040 sec->entsize = 1;
2041
2042 unsigned res = sectab.pos;
2043 sectab.pos += sizeof (section);
2044 return res / sizeof (section);
2045 }
2046
2047 /* Pad to the next alignment boundary, then write BUFFER to disk.
2048 Return the position of the start of the write, or zero on failure. */
2049
2050 unsigned
2051 elf_out::write (const data &buffer)
2052 {
2053 #if MAPPED_WRITING
2054 /* HDR is always mapped. */
2055 if (&buffer != &hdr)
2056 {
2057 bytes_out out (this);
2058 grow (out, buffer.pos, true);
2059 if (out.buffer)
2060 memcpy (out.buffer, buffer.buffer, buffer.pos);
2061 shrink (out);
2062 }
2063 else
2064 /* We should have been aligned during the first allocation. */
2065 gcc_checking_assert (!(pos & (SECTION_ALIGN - 1)));
2066 #else
2067 if (::write (fd, buffer.buffer, buffer.pos) != ssize_t (buffer.pos))
2068 {
2069 set_error (errno);
2070 return 0;
2071 }
2072 #endif
2073 unsigned res = pos;
2074 pos += buffer.pos;
2075
2076 if (unsigned padding = -pos & (SECTION_ALIGN - 1))
2077 {
2078 #if !MAPPED_WRITING
2079 /* Align the section on disk, should help the necessary copies.
2080 fseeking to extend is non-portable. */
2081 static char zero[SECTION_ALIGN];
2082 if (::write (fd, &zero, padding) != ssize_t (padding))
2083 set_error (errno);
2084 #endif
2085 pos += padding;
2086 }
2087 return res;
2088 }
2089
2090 /* Write a streaming buffer. It must be using us as an allocator. */
2091
2092 #if MAPPED_WRITING
2093 unsigned
2094 elf_out::write (const bytes_out &buf)
2095 {
2096 gcc_checking_assert (buf.memory == this);
2097 /* A directly mapped buffer. */
2098 gcc_checking_assert (buf.buffer - hdr.buffer >= 0
2099 && buf.buffer - hdr.buffer + buf.size <= extent);
2100 unsigned res = pos;
2101 pos += buf.pos;
2102
2103 /* Align up. We're not going to advance into the next page. */
2104 pos += -pos & (SECTION_ALIGN - 1);
2105
2106 return res;
2107 }
2108 #endif
2109
2110 /* Write data and add section. STRING_P is true for a string
2111 section, false for PROGBITS. NAME identifies the section (0 is the
2112 empty name). DATA is the contents. Return section number or 0 on
2113 failure (0 is the undef section). */
2114
2115 unsigned
2116 elf_out::add (const bytes_out &data, bool string_p, unsigned name)
2117 {
2118 unsigned off = write (data);
2119
2120 return add (string_p ? SHT_STRTAB : SHT_PROGBITS, name,
2121 off, data.pos, string_p ? SHF_STRINGS : SHF_NONE);
2122 }
2123
2124 /* Begin writing the file. Initialize the section table and write an
2125 empty header. Return false on failure. */
2126
2127 bool
2128 elf_out::begin ()
2129 {
2130 if (!parent::begin ())
2131 return false;
2132
2133 /* Let the allocators pick a default. */
2134 data::simple_memory.grow (strtab, 0, false);
2135 data::simple_memory.grow (sectab, 0, false);
2136
2137 /* The string table starts with an empty string. */
2138 name ("");
2139
2140 /* Create the UNDEF section. */
2141 add (SHT_NONE);
2142
2143 #if MAPPED_WRITING
2144 /* Start a mapping. */
2145 create_mapping (EXPERIMENT (page_size,
2146 (32767 + page_size) & ~(page_size - 1)));
2147 if (!hdr.buffer)
2148 return false;
2149 #endif
2150
2151 /* Write an empty header. */
2152 grow (hdr, sizeof (header), true);
2153 header *h = reinterpret_cast<header *> (hdr.buffer);
2154 memset (h, 0, sizeof (header));
2155 hdr.pos = hdr.size;
2156 write (hdr);
2157 return !get_error ();
2158 }
2159
2160 /* Finish writing the file. Write out the string & section tables.
2161 Fill in the header. Return true on error. */
2162
2163 bool
2164 elf_out::end ()
2165 {
2166 if (fd >= 0)
2167 {
2168 /* Write the string table. */
2169 unsigned strnam = name (".strtab");
2170 unsigned stroff = write (strtab);
2171 unsigned strndx = add (SHT_STRTAB, strnam, stroff, strtab.pos,
2172 SHF_STRINGS);
2173
2174 /* Store escape values in section[0]. */
2175 if (strndx >= SHN_LORESERVE)
2176 {
2177 reinterpret_cast<section *> (sectab.buffer)->link = strndx;
2178 strndx = SHN_XINDEX;
2179 }
2180 unsigned shnum = sectab.pos / sizeof (section);
2181 if (shnum >= SHN_LORESERVE)
2182 {
2183 reinterpret_cast<section *> (sectab.buffer)->size = shnum;
2184 shnum = SHN_XINDEX;
2185 }
2186
2187 unsigned shoff = write (sectab);
2188
2189 #if MAPPED_WRITING
2190 if (offset)
2191 {
2192 remove_mapping ();
2193 offset = 0;
2194 create_mapping ((sizeof (header) + page_size - 1) & ~(page_size - 1),
2195 false);
2196 }
2197 unsigned length = pos;
2198 #else
2199 if (lseek (fd, 0, SEEK_SET) < 0)
2200 set_error (errno);
2201 #endif
2202 /* Write header. */
2203 if (!get_error ())
2204 {
2205 /* Write the correct header now. */
2206 header *h = reinterpret_cast<header *> (hdr.buffer);
2207 h->ident.magic[0] = 0x7f;
2208 h->ident.magic[1] = 'E'; /* Elrond */
2209 h->ident.magic[2] = 'L'; /* is an */
2210 h->ident.magic[3] = 'F'; /* elf. */
2211 h->ident.klass = MY_CLASS;
2212 h->ident.data = MY_ENDIAN;
2213 h->ident.version = EV_CURRENT;
2214 h->ident.osabi = OSABI_NONE;
2215 h->type = ET_NONE;
2216 h->machine = EM_NONE;
2217 h->version = EV_CURRENT;
2218 h->shoff = shoff;
2219 h->ehsize = sizeof (header);
2220 h->shentsize = sizeof (section);
2221 h->shnum = shnum;
2222 h->shstrndx = strndx;
2223
2224 pos = 0;
2225 write (hdr);
2226 }
2227
2228 #if MAPPED_WRITING
2229 remove_mapping ();
2230 if (ftruncate (fd, length))
2231 set_error (errno);
2232 #endif
2233 }
2234
2235 data::simple_memory.shrink (sectab);
2236 data::simple_memory.shrink (strtab);
2237
2238 return parent::end ();
2239 }
2240
2241 /********************************************************************/
2242
2243 /* A dependency set. This is used during stream out to determine the
2244 connectivity of the graph. Every namespace-scope declaration that
2245 needs writing has a depset. The depset is filled with the (depsets
2246 of) declarations within this module that it references. For a
2247 declaration that'll generally be named types. For definitions
2248 it'll also be declarations in the body.
2249
2250 From that we can convert the graph to a DAG, via determining the
2251 Strongly Connected Clusters. Each cluster is streamed
2252 independently, and thus we achieve lazy loading.
2253
2254 Other decls that get a depset are namespaces themselves and
2255 unnameable declarations. */
2256
2257 class depset {
2258 private:
2259 tree entity; /* Entity, or containing namespace. */
2260 uintptr_t discriminator; /* Flags or identifier. */
2261
2262 public:
2263 /* The kinds of entity the depset could describe. The ordering is
2264 significant, see entity_kind_name. */
2265 enum entity_kind
2266 {
2267 EK_DECL, /* A decl. */
2268 EK_SPECIALIZATION, /* A specialization. */
2269 EK_PARTIAL, /* A partial specialization. */
2270 EK_USING, /* A using declaration (at namespace scope). */
2271 EK_NAMESPACE, /* A namespace. */
2272 EK_REDIRECT, /* Redirect to a template_decl. */
2273 EK_EXPLICIT_HWM,
2274 EK_BINDING = EK_EXPLICIT_HWM, /* Implicitly encoded. */
2275 EK_FOR_BINDING, /* A decl being inserted for a binding. */
2276 EK_INNER_DECL, /* A decl defined outside of it's imported
2277 context. */
2278 EK_DIRECT_HWM = EK_PARTIAL + 1,
2279
2280 EK_BITS = 3 /* Only need to encode below EK_EXPLICIT_HWM. */
2281 };
2282
2283 private:
2284 /* Placement of bit fields in discriminator. */
2285 enum disc_bits
2286 {
2287 DB_ZERO_BIT, /* Set to disambiguate identifier from flags */
2288 DB_SPECIAL_BIT, /* First dep slot is special. */
2289 DB_KIND_BIT, /* Kind of the entity. */
2290 DB_KIND_BITS = EK_BITS,
2291 DB_DEFN_BIT = DB_KIND_BIT + DB_KIND_BITS,
2292 DB_IS_MEMBER_BIT, /* Is an out-of-class member. */
2293 DB_IS_INTERNAL_BIT, /* It is an (erroneous)
2294 internal-linkage entity. */
2295 DB_REFS_INTERNAL_BIT, /* Refers to an internal-linkage
2296 entity. */
2297 DB_IMPORTED_BIT, /* An imported entity. */
2298 DB_UNREACHED_BIT, /* A yet-to-be reached entity. */
2299 DB_HIDDEN_BIT, /* A hidden binding. */
2300 /* The following bits are not independent, but enumerating them is
2301 awkward. */
2302 DB_ALIAS_TMPL_INST_BIT, /* An alias template instantiation. */
2303 DB_ALIAS_SPEC_BIT, /* Specialization of an alias template
2304 (in both spec tables). */
2305 DB_TYPE_SPEC_BIT, /* Specialization in the type table.
2306 */
2307 DB_FRIEND_SPEC_BIT, /* An instantiated template friend. */
2308 };
2309
2310 public:
2311 /* The first slot is special for EK_SPECIALIZATIONS it is a
2312 spec_entry pointer. It is not relevant for the SCC
2313 determination. */
2314 vec<depset *> deps; /* Depsets we reference. */
2315
2316 public:
2317 unsigned cluster; /* Strongly connected cluster, later entity number */
2318 unsigned section; /* Section written to. */
2319 /* During SCC construction, section is lowlink, until the depset is
2320 removed from the stack. See Tarjan algorithm for details. */
2321
2322 private:
2323 /* Construction via factories. Destruction via hash traits. */
2324 depset (tree entity);
2325 ~depset ();
2326
2327 public:
2328 static depset *make_binding (tree, tree);
2329 static depset *make_entity (tree, entity_kind, bool = false);
2330 /* Late setting a binding name -- /then/ insert into hash! */
2331 inline void set_binding_name (tree name)
2332 {
2333 gcc_checking_assert (!get_name ());
2334 discriminator = reinterpret_cast<uintptr_t> (name);
2335 }
2336
2337 private:
2338 template<unsigned I> void set_flag_bit ()
2339 {
2340 gcc_checking_assert (I < 2 || !is_binding ());
2341 discriminator |= 1u << I;
2342 }
2343 template<unsigned I> void clear_flag_bit ()
2344 {
2345 gcc_checking_assert (I < 2 || !is_binding ());
2346 discriminator &= ~(1u << I);
2347 }
2348 template<unsigned I> bool get_flag_bit () const
2349 {
2350 gcc_checking_assert (I < 2 || !is_binding ());
2351 return bool ((discriminator >> I) & 1);
2352 }
2353
2354 public:
2355 bool is_binding () const
2356 {
2357 return !get_flag_bit<DB_ZERO_BIT> ();
2358 }
2359 entity_kind get_entity_kind () const
2360 {
2361 if (is_binding ())
2362 return EK_BINDING;
2363 return entity_kind ((discriminator >> DB_KIND_BIT) & ((1u << EK_BITS) - 1));
2364 }
2365 const char *entity_kind_name () const;
2366
2367 public:
2368 bool has_defn () const
2369 {
2370 return get_flag_bit<DB_DEFN_BIT> ();
2371 }
2372
2373 public:
2374 bool is_member () const
2375 {
2376 return get_flag_bit<DB_IS_MEMBER_BIT> ();
2377 }
2378 public:
2379 bool is_internal () const
2380 {
2381 return get_flag_bit<DB_IS_INTERNAL_BIT> ();
2382 }
2383 bool refs_internal () const
2384 {
2385 return get_flag_bit<DB_REFS_INTERNAL_BIT> ();
2386 }
2387 bool is_import () const
2388 {
2389 return get_flag_bit<DB_IMPORTED_BIT> ();
2390 }
2391 bool is_unreached () const
2392 {
2393 return get_flag_bit<DB_UNREACHED_BIT> ();
2394 }
2395 bool is_alias_tmpl_inst () const
2396 {
2397 return get_flag_bit<DB_ALIAS_TMPL_INST_BIT> ();
2398 }
2399 bool is_alias () const
2400 {
2401 return get_flag_bit<DB_ALIAS_SPEC_BIT> ();
2402 }
2403 bool is_hidden () const
2404 {
2405 return get_flag_bit<DB_HIDDEN_BIT> ();
2406 }
2407 bool is_type_spec () const
2408 {
2409 return get_flag_bit<DB_TYPE_SPEC_BIT> ();
2410 }
2411 bool is_friend_spec () const
2412 {
2413 return get_flag_bit<DB_FRIEND_SPEC_BIT> ();
2414 }
2415
2416 public:
2417 /* We set these bit outside of depset. */
2418 void set_hidden_binding ()
2419 {
2420 set_flag_bit<DB_HIDDEN_BIT> ();
2421 }
2422 void clear_hidden_binding ()
2423 {
2424 clear_flag_bit<DB_HIDDEN_BIT> ();
2425 }
2426
2427 public:
2428 bool is_special () const
2429 {
2430 return get_flag_bit<DB_SPECIAL_BIT> ();
2431 }
2432 void set_special ()
2433 {
2434 set_flag_bit<DB_SPECIAL_BIT> ();
2435 }
2436
2437 public:
2438 tree get_entity () const
2439 {
2440 return entity;
2441 }
2442 tree get_name () const
2443 {
2444 gcc_checking_assert (is_binding ());
2445 return reinterpret_cast <tree> (discriminator);
2446 }
2447
2448 public:
2449 /* Traits for a hash table of pointers to bindings. */
2450 struct traits {
2451 /* Each entry is a pointer to a depset. */
2452 typedef depset *value_type;
2453 /* We lookup by container:maybe-identifier pair. */
2454 typedef std::pair<tree,tree> compare_type;
2455
2456 static const bool empty_zero_p = true;
2457
2458 /* hash and equality for compare_type. */
2459 inline static hashval_t hash (const compare_type &p)
2460 {
2461 hashval_t h = pointer_hash<tree_node>::hash (p.first);
2462 if (p.second)
2463 {
2464 hashval_t nh = IDENTIFIER_HASH_VALUE (p.second);
2465 h = iterative_hash_hashval_t (h, nh);
2466 }
2467 return h;
2468 }
2469 inline static bool equal (const value_type b, const compare_type &p)
2470 {
2471 if (b->entity != p.first)
2472 return false;
2473
2474 if (p.second)
2475 return b->discriminator == reinterpret_cast<uintptr_t> (p.second);
2476 else
2477 return !b->is_binding ();
2478 }
2479
2480 /* (re)hasher for a binding itself. */
2481 inline static hashval_t hash (const value_type b)
2482 {
2483 hashval_t h = pointer_hash<tree_node>::hash (b->entity);
2484 if (b->is_binding ())
2485 {
2486 hashval_t nh = IDENTIFIER_HASH_VALUE (b->get_name ());
2487 h = iterative_hash_hashval_t (h, nh);
2488 }
2489 return h;
2490 }
2491
2492 /* Empty via NULL. */
2493 static inline void mark_empty (value_type &p) {p = NULL;}
2494 static inline bool is_empty (value_type p) {return !p;}
2495
2496 /* Nothing is deletable. Everything is insertable. */
2497 static bool is_deleted (value_type) { return false; }
2498 static void mark_deleted (value_type) { gcc_unreachable (); }
2499
2500 /* We own the entities in the hash table. */
2501 static void remove (value_type p)
2502 {
2503 delete (p);
2504 }
2505 };
2506
2507 public:
2508 class hash : public hash_table<traits> {
2509 typedef traits::compare_type key_t;
2510 typedef hash_table<traits> parent;
2511
2512 public:
2513 vec<depset *> worklist; /* Worklist of decls to walk. */
2514 hash *chain; /* Original table. */
2515 depset *current; /* Current depset being depended. */
2516 unsigned section; /* When writing out, the section. */
2517 bool sneakoscope; /* Detecting dark magic (of a voldemort). */
2518 bool reached_unreached; /* We reached an unreached entity. */
2519
2520 public:
2521 hash (size_t size, hash *c = NULL)
2522 : parent (size), chain (c), current (NULL), section (0),
2523 sneakoscope (false), reached_unreached (false)
2524 {
2525 worklist.create (size);
2526 }
2527 ~hash ()
2528 {
2529 worklist.release ();
2530 }
2531
2532 public:
2533 bool is_key_order () const
2534 {
2535 return chain != NULL;
2536 }
2537
2538 private:
2539 depset **entity_slot (tree entity, bool = true);
2540 depset **binding_slot (tree ctx, tree name, bool = true);
2541 depset *maybe_add_declaration (tree decl);
2542
2543 public:
2544 depset *find_dependency (tree entity);
2545 depset *find_binding (tree ctx, tree name);
2546 depset *make_dependency (tree decl, entity_kind);
2547 void add_dependency (depset *);
2548
2549 public:
2550 void add_mergeable (depset *);
2551 depset *add_dependency (tree decl, entity_kind);
2552 void add_namespace_context (depset *, tree ns);
2553
2554 private:
2555 static bool add_binding_entity (tree, WMB_Flags, void *);
2556
2557 public:
2558 bool add_namespace_entities (tree ns, bitmap partitions);
2559 void add_specializations (bool decl_p);
2560 void add_partial_entities (vec<tree, va_gc> *);
2561 void add_class_entities (vec<tree, va_gc> *);
2562
2563 public:
2564 void find_dependencies ();
2565 bool finalize_dependencies ();
2566 vec<depset *> connect ();
2567 };
2568
2569 public:
2570 struct tarjan {
2571 vec<depset *> result;
2572 vec<depset *> stack;
2573 unsigned index;
2574
2575 tarjan (unsigned size)
2576 : index (0)
2577 {
2578 result.create (size);
2579 stack.create (50);
2580 }
2581 ~tarjan ()
2582 {
2583 gcc_assert (!stack.length ());
2584 stack.release ();
2585 }
2586
2587 public:
2588 void connect (depset *);
2589 };
2590 };
2591
2592 inline
2593 depset::depset (tree entity)
2594 :entity (entity), discriminator (0), cluster (0), section (0)
2595 {
2596 deps.create (0);
2597 }
2598
2599 inline
2600 depset::~depset ()
2601 {
2602 deps.release ();
2603 }
2604
2605 const char *
2606 depset::entity_kind_name () const
2607 {
2608 /* Same order as entity_kind. */
2609 static const char *const names[] =
2610 {"decl", "specialization", "partial", "using",
2611 "namespace", "redirect", "binding"};
2612 entity_kind kind = get_entity_kind ();
2613 gcc_checking_assert (kind < sizeof (names) / sizeof(names[0]));
2614 return names[kind];
2615 }
2616
2617 /* Create a depset for a namespace binding NS::NAME. */
2618
2619 depset *depset::make_binding (tree ns, tree name)
2620 {
2621 depset *binding = new depset (ns);
2622
2623 binding->discriminator = reinterpret_cast <uintptr_t> (name);
2624
2625 return binding;
2626 }
2627
2628 depset *depset::make_entity (tree entity, entity_kind ek, bool is_defn)
2629 {
2630 depset *r = new depset (entity);
2631
2632 r->discriminator = ((1 << DB_ZERO_BIT)
2633 | (ek << DB_KIND_BIT)
2634 | is_defn << DB_DEFN_BIT);
2635
2636 return r;
2637 }
2638
2639 /* Values keyed to some unsigned integer. This is not GTY'd, so if
2640 T is tree they must be reachable via some other path. */
2641
2642 template<typename T>
2643 class uintset {
2644 public:
2645 unsigned key; /* Entity index of the other entity. */
2646
2647 /* Payload. */
2648 unsigned allocp2 : 5; /* log(2) allocated pending */
2649 unsigned num : 27; /* Number of pending. */
2650
2651 /* Trailing array of values. */
2652 T values[1];
2653
2654 public:
2655 /* Even with ctors, we're very pod-like. */
2656 uintset (unsigned uid)
2657 : key (uid), allocp2 (0), num (0)
2658 {
2659 }
2660 /* Copy constructor, which is exciting because of the trailing
2661 array. */
2662 uintset (const uintset *from)
2663 {
2664 size_t size = (offsetof (uintset, values)
2665 + sizeof (uintset::values) * from->num);
2666 memmove (this, from, size);
2667 if (from->num)
2668 allocp2++;
2669 }
2670
2671 public:
2672 struct traits : delete_ptr_hash<uintset> {
2673 typedef unsigned compare_type;
2674 typedef typename delete_ptr_hash<uintset>::value_type value_type;
2675
2676 /* Hash and equality for compare_type. */
2677 inline static hashval_t hash (const compare_type k)
2678 {
2679 return hashval_t (k);
2680 }
2681 inline static hashval_t hash (const value_type v)
2682 {
2683 return hash (v->key);
2684 }
2685
2686 inline static bool equal (const value_type v, const compare_type k)
2687 {
2688 return v->key == k;
2689 }
2690 };
2691
2692 public:
2693 class hash : public hash_table<traits>
2694 {
2695 typedef typename traits::compare_type key_t;
2696 typedef hash_table<traits> parent;
2697
2698 public:
2699 hash (size_t size)
2700 : parent (size)
2701 {
2702 }
2703 ~hash ()
2704 {
2705 }
2706
2707 private:
2708 uintset **find_slot (key_t key, insert_option insert)
2709 {
2710 return this->find_slot_with_hash (key, traits::hash (key), insert);
2711 }
2712
2713 public:
2714 uintset *get (key_t key, bool extract = false);
2715 bool add (key_t key, T value);
2716 uintset *create (key_t key, unsigned num, T init = 0);
2717 };
2718 };
2719
2720 /* Add VALUE to KEY's uintset, creating it if necessary. Returns true
2721 if we created the uintset. */
2722
2723 template<typename T>
2724 bool
2725 uintset<T>::hash::add (typename uintset<T>::hash::key_t key, T value)
2726 {
2727 uintset **slot = this->find_slot (key, INSERT);
2728 uintset *set = *slot;
2729 bool is_new = !set;
2730
2731 if (is_new || set->num == (1u << set->allocp2))
2732 {
2733 if (set)
2734 {
2735 unsigned n = set->num * 2;
2736 size_t new_size = (offsetof (uintset, values)
2737 + sizeof (uintset (0u).values) * n);
2738 uintset *new_set = new (::operator new (new_size)) uintset (set);
2739 delete set;
2740 set = new_set;
2741 }
2742 else
2743 set = new (::operator new (sizeof (*set))) uintset (key);
2744 *slot = set;
2745 }
2746
2747 set->values[set->num++] = value;
2748
2749 return is_new;
2750 }
2751
2752 template<typename T>
2753 uintset<T> *
2754 uintset<T>::hash::create (typename uintset<T>::hash::key_t key, unsigned num,
2755 T init)
2756 {
2757 unsigned p2alloc = 0;
2758 for (unsigned v = num; v != 1; v = (v >> 1) | (v & 1))
2759 p2alloc++;
2760
2761 size_t new_size = (offsetof (uintset, values)
2762 + (sizeof (uintset (0u).values) << p2alloc));
2763 uintset *set = new (::operator new (new_size)) uintset (key);
2764 set->allocp2 = p2alloc;
2765 set->num = num;
2766 while (num--)
2767 set->values[num] = init;
2768
2769 uintset **slot = this->find_slot (key, INSERT);
2770 gcc_checking_assert (!*slot);
2771 *slot = set;
2772
2773 return set;
2774 }
2775
2776 /* Locate KEY's uintset, potentially removing it from the hash table */
2777
2778 template<typename T>
2779 uintset<T> *
2780 uintset<T>::hash::get (typename uintset<T>::hash::key_t key, bool extract)
2781 {
2782 uintset *res = NULL;
2783
2784 if (uintset **slot = this->find_slot (key, NO_INSERT))
2785 {
2786 res = *slot;
2787 if (extract)
2788 /* We need to remove the pendset without deleting it. */
2789 traits::mark_deleted (*slot);
2790 }
2791
2792 return res;
2793 }
2794
2795 /* Entities keyed to some other entity. When we load the other
2796 entity, we mark it in some way to indicate there are further
2797 entities to load when you start looking inside it. For instance
2798 template specializations are keyed to their most general template.
2799 When we instantiate that, we need to know all the partial
2800 specializations (to pick the right template), and all the known
2801 specializations (to avoid reinstantiating it, and/or whether it's
2802 extern). The values split into two ranges. If !MSB set, indices
2803 into the entity array. If MSB set, an indirection to another
2804 pendset. */
2805
2806 typedef uintset<unsigned> pendset;
2807 static pendset::hash *pending_table;
2808
2809 /* Some entities are attached to another entitity for ODR purposes.
2810 For example, at namespace scope, 'inline auto var = []{};', that
2811 lambda is attached to 'var', and follows its ODRness. */
2812 typedef uintset<tree> attachset;
2813 static attachset::hash *attached_table;
2814
2815 /********************************************************************/
2816 /* Tree streaming. The tree streaming is very specific to the tree
2817 structures themselves. A tag indicates the kind of tree being
2818 streamed. -ve tags indicate backreferences to already-streamed
2819 trees. Backreferences are auto-numbered. */
2820
2821 /* Tree tags. */
2822 enum tree_tag {
2823 tt_null, /* NULL_TREE. */
2824 tt_fixed, /* Fixed vector index. */
2825
2826 tt_node, /* By-value node. */
2827 tt_decl, /* By-value mergeable decl. */
2828 tt_tpl_parm, /* Template parm. */
2829
2830 /* The ordering of the following 4 is relied upon in
2831 trees_out::tree_node. */
2832 tt_id, /* Identifier node. */
2833 tt_conv_id, /* Conversion operator name. */
2834 tt_anon_id, /* Anonymous name. */
2835 tt_lambda_id, /* Lambda name. */
2836
2837 tt_typedef_type, /* A (possibly implicit) typedefed type. */
2838 tt_derived_type, /* A type derived from another type. */
2839 tt_variant_type, /* A variant of another type. */
2840
2841 tt_tinfo_var, /* Typeinfo object. */
2842 tt_tinfo_typedef, /* Typeinfo typedef. */
2843 tt_ptrmem_type, /* Pointer to member type. */
2844
2845 tt_parm, /* Function parameter or result. */
2846 tt_enum_value, /* An enum value. */
2847 tt_enum_decl, /* An enum decl. */
2848 tt_data_member, /* Data member/using-decl. */
2849
2850 tt_binfo, /* A BINFO. */
2851 tt_vtable, /* A vtable. */
2852 tt_thunk, /* A thunk. */
2853 tt_clone_ref,
2854
2855 tt_entity, /* A extra-cluster entity. */
2856
2857 tt_template, /* The TEMPLATE_RESULT of a template. */
2858 };
2859
2860 enum walk_kind {
2861 WK_none, /* No walk to do (a back- or fixed-ref happened). */
2862 WK_normal, /* Normal walk (by-name if possible). */
2863
2864 WK_value, /* By-value walk. */
2865 };
2866
2867 enum merge_kind
2868 {
2869 MK_unique, /* Known unique. */
2870 MK_named, /* Found by CTX, NAME + maybe_arg types etc. */
2871 MK_field, /* Found by CTX and index on TYPE_FIELDS */
2872 MK_vtable, /* Found by CTX and index on TYPE_VTABLES */
2873 MK_as_base, /* Found by CTX. */
2874
2875 MK_partial,
2876
2877 MK_enum, /* Found by CTX, & 1stMemberNAME. */
2878 MK_attached, /* Found by attachee & index. */
2879
2880 MK_friend_spec, /* Like named, but has a tmpl & args too. */
2881 MK_local_friend, /* Found by CTX, index. */
2882
2883 MK_indirect_lwm = MK_enum,
2884
2885 /* Template specialization kinds below. These are all found via
2886 primary template and specialization args. */
2887 MK_template_mask = 0x10, /* A template specialization. */
2888
2889 MK_tmpl_decl_mask = 0x4, /* In decl table. */
2890 MK_tmpl_alias_mask = 0x2, /* Also in type table */
2891
2892 MK_tmpl_tmpl_mask = 0x1, /* We want TEMPLATE_DECL. */
2893
2894 MK_type_spec = MK_template_mask,
2895 MK_type_tmpl_spec = MK_type_spec | MK_tmpl_tmpl_mask,
2896
2897 MK_decl_spec = MK_template_mask | MK_tmpl_decl_mask,
2898 MK_decl_tmpl_spec = MK_decl_spec | MK_tmpl_tmpl_mask,
2899
2900 MK_alias_spec = MK_decl_spec | MK_tmpl_alias_mask,
2901
2902 MK_hwm = 0x20
2903 };
2904 /* This is more than a debugging array. NULLs are used to determine
2905 an invalid merge_kind number. */
2906 static char const *const merge_kind_name[MK_hwm] =
2907 {
2908 "unique", "named", "field", "vtable", /* 0...3 */
2909 "asbase", "partial", "enum", "attached", /* 4...7 */
2910
2911 "friend spec", "local friend", NULL, NULL, /* 8...11 */
2912 NULL, NULL, NULL, NULL,
2913
2914 "type spec", "type tmpl spec", /* 16,17 type (template). */
2915 NULL, NULL,
2916
2917 "decl spec", "decl tmpl spec", /* 20,21 decl (template). */
2918 "alias spec", NULL, /* 22,23 alias. */
2919 NULL, NULL, NULL, NULL,
2920 NULL, NULL, NULL, NULL,
2921 };
2922
2923 /* Mergeable entity location data. */
2924 struct merge_key {
2925 cp_ref_qualifier ref_q : 2;
2926 unsigned index;
2927
2928 tree ret; /* Return type, if appropriate. */
2929 tree args; /* Arg types, if appropriate. */
2930
2931 tree constraints; /* Constraints. */
2932
2933 merge_key ()
2934 :ref_q (REF_QUAL_NONE), index (0),
2935 ret (NULL_TREE), args (NULL_TREE),
2936 constraints (NULL_TREE)
2937 {
2938 }
2939 };
2940
2941 struct duplicate_hash : nodel_ptr_hash<tree_node>
2942 {
2943 inline static hashval_t hash (value_type decl)
2944 {
2945 if (TREE_CODE (decl) == TREE_BINFO)
2946 decl = TYPE_NAME (BINFO_TYPE (decl));
2947 return hashval_t (DECL_UID (decl));
2948 }
2949 };
2950
2951 /* Hashmap of merged duplicates. Usually decls, but can contain
2952 BINFOs. */
2953 typedef hash_map<tree,uintptr_t,
2954 simple_hashmap_traits<duplicate_hash,uintptr_t> >
2955 duplicate_hash_map;
2956
2957 /* Tree stream reader. Note that reading a stream doesn't mark the
2958 read trees with TREE_VISITED. Thus it's quite safe to have
2959 multiple concurrent readers. Which is good, because lazy
2960 loading. */
2961 class trees_in : public bytes_in {
2962 typedef bytes_in parent;
2963
2964 private:
2965 module_state *state; /* Module being imported. */
2966 vec<tree> back_refs; /* Back references. */
2967 duplicate_hash_map *duplicates; /* Map from existings to duplicate. */
2968 vec<tree> post_decls; /* Decls to post process. */
2969 unsigned unused; /* Inhibit any interior TREE_USED
2970 marking. */
2971
2972 public:
2973 trees_in (module_state *);
2974 ~trees_in ();
2975
2976 public:
2977 int insert (tree);
2978 tree back_ref (int);
2979
2980 private:
2981 tree start (unsigned = 0);
2982
2983 public:
2984 /* Needed for binfo writing */
2985 bool core_bools (tree);
2986
2987 private:
2988 /* Stream tree_core, lang_decl_specific and lang_type_specific
2989 bits. */
2990 bool core_vals (tree);
2991 bool lang_type_bools (tree);
2992 bool lang_type_vals (tree);
2993 bool lang_decl_bools (tree);
2994 bool lang_decl_vals (tree);
2995 bool lang_vals (tree);
2996 bool tree_node_bools (tree);
2997 bool tree_node_vals (tree);
2998 tree tree_value ();
2999 tree decl_value ();
3000 tree tpl_parm_value ();
3001
3002 private:
3003 tree chained_decls (); /* Follow DECL_CHAIN. */
3004 vec<tree, va_heap> *vec_chained_decls ();
3005 vec<tree, va_gc> *tree_vec (); /* vec of tree. */
3006 vec<tree_pair_s, va_gc> *tree_pair_vec (); /* vec of tree_pair. */
3007 tree tree_list (bool has_purpose);
3008
3009 public:
3010 /* Read a tree node. */
3011 tree tree_node (bool is_use = false);
3012
3013 private:
3014 bool install_entity (tree decl);
3015 tree tpl_parms (unsigned &tpl_levels);
3016 bool tpl_parms_fini (tree decl, unsigned tpl_levels);
3017 bool tpl_header (tree decl, unsigned *tpl_levels);
3018 int fn_parms_init (tree);
3019 void fn_parms_fini (int tag, tree fn, tree existing, bool has_defn);
3020 unsigned add_indirect_tpl_parms (tree);
3021 public:
3022 bool add_indirects (tree);
3023
3024 public:
3025 /* Serialize various definitions. */
3026 bool read_definition (tree decl);
3027
3028 private:
3029 bool is_matching_decl (tree existing, tree decl);
3030 static bool install_implicit_member (tree decl);
3031 bool read_function_def (tree decl, tree maybe_template);
3032 bool read_var_def (tree decl, tree maybe_template);
3033 bool read_class_def (tree decl, tree maybe_template);
3034 bool read_enum_def (tree decl, tree maybe_template);
3035
3036 public:
3037 tree decl_container ();
3038 tree key_mergeable (int tag, merge_kind, tree decl, tree inner, tree type,
3039 tree container, bool is_mod);
3040 unsigned binfo_mergeable (tree *);
3041
3042 private:
3043 uintptr_t *find_duplicate (tree existing);
3044 void register_duplicate (tree decl, tree existing);
3045 /* Mark as an already diagnosed bad duplicate. */
3046 void unmatched_duplicate (tree existing)
3047 {
3048 *find_duplicate (existing) |= 1;
3049 }
3050
3051 public:
3052 bool is_duplicate (tree decl)
3053 {
3054 return find_duplicate (decl) != NULL;
3055 }
3056 tree maybe_duplicate (tree decl)
3057 {
3058 if (uintptr_t *dup = find_duplicate (decl))
3059 return reinterpret_cast<tree> (*dup & ~uintptr_t (1));
3060 return decl;
3061 }
3062 tree odr_duplicate (tree decl, bool has_defn);
3063
3064 public:
3065 /* Return the next decl to postprocess, or NULL. */
3066 tree post_process ()
3067 {
3068 return post_decls.length () ? post_decls.pop () : NULL_TREE;
3069 }
3070 private:
3071 /* Register DECL for postprocessing. */
3072 void post_process (tree decl)
3073 {
3074 post_decls.safe_push (decl);
3075 }
3076
3077 private:
3078 void assert_definition (tree, bool installing);
3079 };
3080
3081 trees_in::trees_in (module_state *state)
3082 :parent (), state (state), unused (0)
3083 {
3084 duplicates = NULL;
3085 back_refs.create (500);
3086 post_decls.create (0);
3087 }
3088
3089 trees_in::~trees_in ()
3090 {
3091 delete (duplicates);
3092 back_refs.release ();
3093 post_decls.release ();
3094 }
3095
3096 /* Tree stream writer. */
3097 class trees_out : public bytes_out {
3098 typedef bytes_out parent;
3099
3100 private:
3101 module_state *state; /* The module we are writing. */
3102 ptr_int_hash_map tree_map; /* Trees to references */
3103 depset::hash *dep_hash; /* Dependency table. */
3104 int ref_num; /* Back reference number. */
3105 unsigned section;
3106 #if CHECKING_P
3107 int importedness; /* Checker that imports not occurring
3108 inappropriately. */
3109 #endif
3110
3111 public:
3112 trees_out (allocator *, module_state *, depset::hash &deps, unsigned sec = 0);
3113 ~trees_out ();
3114
3115 private:
3116 void mark_trees ();
3117 void unmark_trees ();
3118
3119 public:
3120 /* Hey, let's ignore the well known STL iterator idiom. */
3121 void begin ();
3122 unsigned end (elf_out *sink, unsigned name, unsigned *crc_ptr);
3123 void end ();
3124
3125 public:
3126 enum tags
3127 {
3128 tag_backref = -1, /* Upper bound on the backrefs. */
3129 tag_value = 0, /* Write by value. */
3130 tag_fixed /* Lower bound on the fixed trees. */
3131 };
3132
3133 public:
3134 bool is_key_order () const
3135 {
3136 return dep_hash->is_key_order ();
3137 }
3138
3139 public:
3140 int insert (tree, walk_kind = WK_normal);
3141
3142 private:
3143 void start (tree, bool = false);
3144
3145 private:
3146 walk_kind ref_node (tree);
3147 public:
3148 int get_tag (tree);
3149 void set_importing (int i ATTRIBUTE_UNUSED)
3150 {
3151 #if CHECKING_P
3152 importedness = i;
3153 #endif
3154 }
3155
3156 private:
3157 void core_bools (tree);
3158 void core_vals (tree);
3159 void lang_type_bools (tree);
3160 void lang_type_vals (tree);
3161 void lang_decl_bools (tree);
3162 void lang_decl_vals (tree);
3163 void lang_vals (tree);
3164 void tree_node_bools (tree);
3165 void tree_node_vals (tree);
3166
3167 private:
3168 void chained_decls (tree);
3169 void vec_chained_decls (tree);
3170 void tree_vec (vec<tree, va_gc> *);
3171 void tree_pair_vec (vec<tree_pair_s, va_gc> *);
3172 void tree_list (tree, bool has_purpose);
3173
3174 public:
3175 /* Mark a node for by-value walking. */
3176 void mark_by_value (tree);
3177
3178 public:
3179 void tree_node (tree);
3180
3181 private:
3182 void install_entity (tree decl, depset *);
3183 void tpl_parms (tree parms, unsigned &tpl_levels);
3184 void tpl_parms_fini (tree decl, unsigned tpl_levels);
3185 void fn_parms_fini (tree) {}
3186 unsigned add_indirect_tpl_parms (tree);
3187 public:
3188 void add_indirects (tree);
3189 void fn_parms_init (tree);
3190 void tpl_header (tree decl, unsigned *tpl_levels);
3191
3192 public:
3193 merge_kind get_merge_kind (tree decl, depset *maybe_dep);
3194 tree decl_container (tree decl);
3195 void key_mergeable (int tag, merge_kind, tree decl, tree inner,
3196 tree container, depset *maybe_dep);
3197 void binfo_mergeable (tree binfo);
3198
3199 private:
3200 bool decl_node (tree, walk_kind ref);
3201 void type_node (tree);
3202 void tree_value (tree);
3203 void tpl_parm_value (tree);
3204
3205 public:
3206 void decl_value (tree, depset *);
3207
3208 public:
3209 /* Serialize various definitions. */
3210 void write_definition (tree decl);
3211 void mark_declaration (tree decl, bool do_defn);
3212
3213 private:
3214 void mark_function_def (tree decl);
3215 void mark_var_def (tree decl);
3216 void mark_class_def (tree decl);
3217 void mark_enum_def (tree decl);
3218 void mark_class_member (tree decl, bool do_defn = true);
3219 void mark_binfos (tree type);
3220
3221 private:
3222 void write_var_def (tree decl);
3223 void write_function_def (tree decl);
3224 void write_class_def (tree decl);
3225 void write_enum_def (tree decl);
3226
3227 private:
3228 static void assert_definition (tree);
3229
3230 public:
3231 static void instrument ();
3232
3233 private:
3234 /* Tree instrumentation. */
3235 static unsigned tree_val_count;
3236 static unsigned decl_val_count;
3237 static unsigned back_ref_count;
3238 static unsigned null_count;
3239 };
3240
3241 /* Instrumentation counters. */
3242 unsigned trees_out::tree_val_count;
3243 unsigned trees_out::decl_val_count;
3244 unsigned trees_out::back_ref_count;
3245 unsigned trees_out::null_count;
3246
3247 trees_out::trees_out (allocator *mem, module_state *state, depset::hash &deps,
3248 unsigned section)
3249 :parent (mem), state (state), tree_map (500),
3250 dep_hash (&deps), ref_num (0), section (section)
3251 {
3252 #if CHECKING_P
3253 importedness = 0;
3254 #endif
3255 }
3256
3257 trees_out::~trees_out ()
3258 {
3259 }
3260
3261 /********************************************************************/
3262 /* Location. We're aware of the line-map concept and reproduce it
3263 here. Each imported module allocates a contiguous span of ordinary
3264 maps, and of macro maps. adhoc maps are serialized by contents,
3265 not pre-allocated. The scattered linemaps of a module are
3266 coalesced when writing. */
3267
3268
3269 /* I use half-open [first,second) ranges. */
3270 typedef std::pair<unsigned,unsigned> range_t;
3271
3272 /* A range of locations. */
3273 typedef std::pair<location_t,location_t> loc_range_t;
3274
3275 /* Spans of the line maps that are occupied by this TU. I.e. not
3276 within imports. Only extended when in an interface unit.
3277 Interval zero corresponds to the forced header linemap(s). This
3278 is a singleton object. */
3279
3280 class loc_spans {
3281 public:
3282 /* An interval of line maps. The line maps here represent a contiguous
3283 non-imported range. */
3284 struct span {
3285 loc_range_t ordinary; /* Ordinary map location range. */
3286 loc_range_t macro; /* Macro map location range. */
3287 int ordinary_delta; /* Add to ordinary loc to get serialized loc. */
3288 int macro_delta; /* Likewise for macro loc. */
3289 };
3290
3291 private:
3292 vec<span> *spans;
3293
3294 public:
3295 loc_spans ()
3296 /* Do not preallocate spans, as that causes
3297 --enable-detailed-mem-stats problems. */
3298 : spans (nullptr)
3299 {
3300 }
3301 ~loc_spans ()
3302 {
3303 delete spans;
3304 }
3305
3306 public:
3307 span &operator[] (unsigned ix)
3308 {
3309 return (*spans)[ix];
3310 }
3311 unsigned length () const
3312 {
3313 return spans->length ();
3314 }
3315
3316 public:
3317 bool init_p () const
3318 {
3319 return spans != nullptr;
3320 }
3321 /* Initializer. */
3322 void init (const line_maps *lmaps, const line_map_ordinary *map);
3323
3324 /* Slightly skewed preprocessed files can cause us to miss an
3325 initialization in some places. Fallback initializer. */
3326 void maybe_init ()
3327 {
3328 if (!init_p ())
3329 init (line_table, nullptr);
3330 }
3331
3332 public:
3333 enum {
3334 SPAN_RESERVED = 0, /* Reserved (fixed) locations. */
3335 SPAN_FIRST = 1, /* LWM of locations to stream */
3336 SPAN_MAIN = 2 /* Main file and onwards. */
3337 };
3338
3339 public:
3340 location_t main_start () const
3341 {
3342 return (*spans)[SPAN_MAIN].ordinary.first;
3343 }
3344
3345 public:
3346 void open (location_t);
3347 void close ();
3348
3349 public:
3350 /* Propagate imported linemaps to us, if needed. */
3351 bool maybe_propagate (module_state *import, location_t loc);
3352
3353 public:
3354 const span *ordinary (location_t);
3355 const span *macro (location_t);
3356 };
3357
3358 static loc_spans spans;
3359
3360 /********************************************************************/
3361 /* Data needed by a module during the process of loading. */
3362 struct GTY(()) slurping {
3363
3364 /* Remap import's module numbering to our numbering. Values are
3365 shifted by 1. Bit0 encodes if the import is direct. */
3366 vec<unsigned, va_heap, vl_embed> *
3367 GTY((skip)) remap; /* Module owner remapping. */
3368
3369 elf_in *GTY((skip)) from; /* The elf loader. */
3370
3371 /* This map is only for header imports themselves -- the global
3372 headers bitmap hold it for the current TU. */
3373 bitmap headers; /* Transitive set of direct imports, including
3374 self. Used for macro visibility and
3375 priority. */
3376
3377 /* These objects point into the mmapped area, unless we're not doing
3378 that, or we got frozen or closed. In those cases they point to
3379 buffers we own. */
3380 bytes_in macro_defs; /* Macro definitions. */
3381 bytes_in macro_tbl; /* Macro table. */
3382
3383 /* Location remapping. first->ordinary, second->macro. */
3384 range_t GTY((skip)) loc_deltas;
3385
3386 unsigned current; /* Section currently being loaded. */
3387 unsigned remaining; /* Number of lazy sections yet to read. */
3388 unsigned lru; /* An LRU counter. */
3389
3390 public:
3391 slurping (elf_in *);
3392 ~slurping ();
3393
3394 public:
3395 /* Close the ELF file, if it's open. */
3396 void close ()
3397 {
3398 if (from)
3399 {
3400 from->end ();
3401 delete from;
3402 from = NULL;
3403 }
3404 }
3405
3406 public:
3407 void release_macros ();
3408
3409 public:
3410 void alloc_remap (unsigned size)
3411 {
3412 gcc_assert (!remap);
3413 vec_safe_reserve (remap, size);
3414 for (unsigned ix = size; ix--;)
3415 remap->quick_push (0);
3416 }
3417 unsigned remap_module (unsigned owner)
3418 {
3419 if (owner < remap->length ())
3420 return (*remap)[owner] >> 1;
3421 return 0;
3422 }
3423
3424 public:
3425 /* GC allocation. But we must explicitly delete it. */
3426 static void *operator new (size_t x)
3427 {
3428 return ggc_alloc_atomic (x);
3429 }
3430 static void operator delete (void *p)
3431 {
3432 ggc_free (p);
3433 }
3434 };
3435
3436 slurping::slurping (elf_in *from)
3437 : remap (NULL), from (from),
3438 headers (BITMAP_GGC_ALLOC ()), macro_defs (), macro_tbl (),
3439 loc_deltas (0, 0),
3440 current (~0u), remaining (0), lru (0)
3441 {
3442 }
3443
3444 slurping::~slurping ()
3445 {
3446 vec_free (remap);
3447 remap = NULL;
3448 release_macros ();
3449 close ();
3450 }
3451
3452 void slurping::release_macros ()
3453 {
3454 if (macro_defs.size)
3455 elf_in::release (from, macro_defs);
3456 if (macro_tbl.size)
3457 elf_in::release (from, macro_tbl);
3458 }
3459
3460 /* Information about location maps used during writing. */
3461
3462 struct location_map_info {
3463 range_t num_maps;
3464
3465 unsigned max_range;
3466 };
3467
3468 /* Flage for extensions that end up being streamed. */
3469
3470 enum streamed_extensions {
3471 SE_OPENMP = 1 << 0,
3472 SE_BITS = 1
3473 };
3474
3475 /********************************************************************/
3476 struct module_state_config;
3477
3478 /* Increasing levels of loadedness. */
3479 enum module_loadedness {
3480 ML_NONE, /* Not loaded. */
3481 ML_CONFIG, /* Config loaed. */
3482 ML_PREPROCESSOR, /* Preprocessor loaded. */
3483 ML_LANGUAGE, /* Language loaded. */
3484 };
3485
3486 /* Increasing levels of directness (toplevel) of import. */
3487 enum module_directness {
3488 MD_NONE, /* Not direct. */
3489 MD_PARTITION_DIRECT, /* Direct import of a partition. */
3490 MD_DIRECT, /* Direct import. */
3491 MD_PURVIEW_DIRECT, /* direct import in purview. */
3492 };
3493
3494 /* State of a particular module. */
3495
3496 class GTY((chain_next ("%h.parent"), for_user)) module_state {
3497 public:
3498 /* We always import & export ourselves. */
3499 bitmap imports; /* Transitive modules we're importing. */
3500 bitmap exports; /* Subset of that, that we're exporting. */
3501
3502 module_state *parent;
3503 tree name; /* Name of the module. */
3504
3505 slurping *slurp; /* Data for loading. */
3506
3507 const char *flatname; /* Flatname of module. */
3508 char *filename; /* CMI Filename */
3509
3510 /* Indices into the entity_ary. */
3511 unsigned entity_lwm;
3512 unsigned entity_num;
3513
3514 /* Location ranges for this module. adhoc-locs are decomposed, so
3515 don't have a range. */
3516 loc_range_t GTY((skip)) ordinary_locs;
3517 loc_range_t GTY((skip)) macro_locs;
3518
3519 /* LOC is first set too the importing location. When initially
3520 loaded it refers to a module loc whose parent is the importing
3521 location. */
3522 location_t loc; /* Location referring to module itself. */
3523 unsigned crc; /* CRC we saw reading it in. */
3524
3525 unsigned mod; /* Module owner number. */
3526 unsigned remap; /* Remapping during writing. */
3527
3528 unsigned short subst; /* Mangle subst if !0. */
3529
3530 /* How loaded this module is. */
3531 enum module_loadedness loadedness : 2;
3532
3533 bool module_p : 1; /* /The/ module of this TU. */
3534 bool header_p : 1; /* Is a header unit. */
3535 bool interface_p : 1; /* An interface. */
3536 bool partition_p : 1; /* A partition. */
3537
3538 /* How directly this module is imported. */
3539 enum module_directness directness : 2;
3540
3541 bool exported_p : 1; /* directness != MD_NONE && exported. */
3542 bool cmi_noted_p : 1; /* We've told the user about the CMI, don't
3543 do it again */
3544 bool call_init_p : 1; /* This module's global initializer needs
3545 calling. */
3546 /* Record extensions emitted or permitted. */
3547 unsigned extensions : SE_BITS;
3548 /* 12 bits used, 4 bits remain */
3549
3550 public:
3551 module_state (tree name, module_state *, bool);
3552 ~module_state ();
3553
3554 public:
3555 void release ()
3556 {
3557 imports = exports = NULL;
3558 slurped ();
3559 }
3560 void slurped ()
3561 {
3562 delete slurp;
3563 slurp = NULL;
3564 }
3565 elf_in *from () const
3566 {
3567 return slurp->from;
3568 }
3569
3570 public:
3571 /* Kind of this module. */
3572 bool is_module () const
3573 {
3574 return module_p;
3575 }
3576 bool is_header () const
3577 {
3578 return header_p;
3579 }
3580 bool is_interface () const
3581 {
3582 return interface_p;
3583 }
3584 bool is_partition () const
3585 {
3586 return partition_p;
3587 }
3588
3589 /* How this module is used in the current TU. */
3590 bool is_exported () const
3591 {
3592 return exported_p;
3593 }
3594 bool is_direct () const
3595 {
3596 return directness >= MD_DIRECT;
3597 }
3598 bool is_purview_direct () const
3599 {
3600 return directness == MD_PURVIEW_DIRECT;
3601 }
3602 bool is_partition_direct () const
3603 {
3604 return directness == MD_PARTITION_DIRECT;
3605 }
3606
3607 public:
3608 /* Is this not a real module? */
3609 bool is_rooted () const
3610 {
3611 return loc != UNKNOWN_LOCATION;
3612 }
3613
3614 public:
3615 bool check_not_purview (location_t loc);
3616
3617 public:
3618 void mangle (bool include_partition);
3619
3620 public:
3621 void set_import (module_state const *, bool is_export);
3622 void announce (const char *) const;
3623
3624 public:
3625 /* Read and write module. */
3626 void write (elf_out *to, cpp_reader *);
3627 bool read_initial (cpp_reader *);
3628 bool read_preprocessor (bool);
3629 bool read_language (bool);
3630
3631 public:
3632 /* Read a section. */
3633 bool load_section (unsigned snum, binding_slot *mslot);
3634 /* Lazily read a section. */
3635 bool lazy_load (unsigned index, binding_slot *mslot);
3636
3637 public:
3638 /* Juggle a limited number of file numbers. */
3639 static void freeze_an_elf ();
3640 bool maybe_defrost ();
3641
3642 public:
3643 void maybe_completed_reading ();
3644 bool check_read (bool outermost, bool ok);
3645
3646 private:
3647 /* The README, for human consumption. */
3648 void write_readme (elf_out *to, cpp_reader *,
3649 const char *dialect, unsigned extensions);
3650 void write_env (elf_out *to);
3651
3652 private:
3653 /* Import tables. */
3654 void write_imports (bytes_out &cfg, bool direct);
3655 unsigned read_imports (bytes_in &cfg, cpp_reader *, line_maps *maps);
3656
3657 private:
3658 void write_imports (elf_out *to, unsigned *crc_ptr);
3659 bool read_imports (cpp_reader *, line_maps *);
3660
3661 private:
3662 void write_partitions (elf_out *to, unsigned, unsigned *crc_ptr);
3663 bool read_partitions (unsigned);
3664
3665 private:
3666 void write_config (elf_out *to, struct module_state_config &, unsigned crc);
3667 bool read_config (struct module_state_config &);
3668 static void write_counts (elf_out *to, unsigned [], unsigned *crc_ptr);
3669 bool read_counts (unsigned []);
3670
3671 public:
3672 void note_cmi_name ();
3673
3674 private:
3675 static unsigned write_bindings (elf_out *to, vec<depset *> depsets,
3676 unsigned *crc_ptr);
3677 bool read_bindings (unsigned count, unsigned lwm, unsigned hwm);
3678
3679 static void write_namespace (bytes_out &sec, depset *ns_dep);
3680 tree read_namespace (bytes_in &sec);
3681
3682 void write_namespaces (elf_out *to, vec<depset *> spaces,
3683 unsigned, unsigned *crc_ptr);
3684 bool read_namespaces (unsigned);
3685
3686 unsigned write_cluster (elf_out *to, depset *depsets[], unsigned size,
3687 depset::hash &, unsigned *counts, unsigned *crc_ptr);
3688 bool read_cluster (unsigned snum);
3689
3690 private:
3691 unsigned write_inits (elf_out *to, depset::hash &, unsigned *crc_ptr);
3692 bool read_inits (unsigned count);
3693
3694 private:
3695 void write_pendings (elf_out *to, vec<depset *> depsets,
3696 depset::hash &, unsigned count, unsigned *crc_ptr);
3697 bool read_pendings (unsigned count);
3698
3699 private:
3700 void write_entities (elf_out *to, vec<depset *> depsets,
3701 unsigned count, unsigned *crc_ptr);
3702 bool read_entities (unsigned count, unsigned lwm, unsigned hwm);
3703
3704 private:
3705 location_map_info write_prepare_maps (module_state_config *);
3706 bool read_prepare_maps (const module_state_config *);
3707
3708 void write_ordinary_maps (elf_out *to, location_map_info &,
3709 module_state_config *, bool, unsigned *crc_ptr);
3710 bool read_ordinary_maps ();
3711 void write_macro_maps (elf_out *to, location_map_info &,
3712 module_state_config *, unsigned *crc_ptr);
3713 bool read_macro_maps ();
3714
3715 private:
3716 void write_define (bytes_out &, const cpp_macro *, bool located = true);
3717 cpp_macro *read_define (bytes_in &, cpp_reader *, bool located = true) const;
3718 unsigned write_macros (elf_out *to, cpp_reader *, unsigned *crc_ptr);
3719 bool read_macros ();
3720 void install_macros ();
3721
3722 public:
3723 void import_macros ();
3724
3725 public:
3726 static void undef_macro (cpp_reader *, location_t, cpp_hashnode *);
3727 static cpp_macro *deferred_macro (cpp_reader *, location_t, cpp_hashnode *);
3728
3729 public:
3730 static void write_location (bytes_out &, location_t);
3731 location_t read_location (bytes_in &) const;
3732
3733 public:
3734 void set_flatname ();
3735 const char *get_flatname () const
3736 {
3737 return flatname;
3738 }
3739 location_t imported_from () const;
3740
3741 public:
3742 void set_filename (const Cody::Packet &);
3743 bool do_import (cpp_reader *, bool outermost);
3744 };
3745
3746 /* Hash module state by name. This cannot be a member of
3747 module_state, because of GTY restrictions. We never delete from
3748 the hash table, but ggc_ptr_hash doesn't support that
3749 simplification. */
3750
3751 struct module_state_hash : ggc_ptr_hash<module_state> {
3752 typedef std::pair<tree,uintptr_t> compare_type; /* {name,parent} */
3753
3754 static inline hashval_t hash (const value_type m);
3755 static inline hashval_t hash (const compare_type &n);
3756 static inline bool equal (const value_type existing,
3757 const compare_type &candidate);
3758 };
3759
3760 module_state::module_state (tree name, module_state *parent, bool partition)
3761 : imports (BITMAP_GGC_ALLOC ()), exports (BITMAP_GGC_ALLOC ()),
3762 parent (parent), name (name), slurp (NULL),
3763 flatname (NULL), filename (NULL),
3764 entity_lwm (~0u >> 1), entity_num (0),
3765 ordinary_locs (0, 0), macro_locs (0, 0),
3766 loc (UNKNOWN_LOCATION),
3767 crc (0), mod (MODULE_UNKNOWN), remap (0), subst (0)
3768 {
3769 loadedness = ML_NONE;
3770
3771 module_p = header_p = interface_p = partition_p = false;
3772
3773 directness = MD_NONE;
3774 exported_p = false;
3775
3776 cmi_noted_p = false;
3777 call_init_p = false;
3778
3779 partition_p = partition;
3780
3781 extensions = 0;
3782 if (name && TREE_CODE (name) == STRING_CST)
3783 {
3784 header_p = true;
3785
3786 const char *string = TREE_STRING_POINTER (name);
3787 gcc_checking_assert (string[0] == '.'
3788 ? IS_DIR_SEPARATOR (string[1])
3789 : IS_ABSOLUTE_PATH (string));
3790 }
3791
3792 gcc_checking_assert (!(parent && header_p));
3793 }
3794
3795 module_state::~module_state ()
3796 {
3797 release ();
3798 }
3799
3800 /* Hash module state. */
3801 static hashval_t
3802 module_name_hash (const_tree name)
3803 {
3804 if (TREE_CODE (name) == STRING_CST)
3805 return htab_hash_string (TREE_STRING_POINTER (name));
3806 else
3807 return IDENTIFIER_HASH_VALUE (name);
3808 }
3809
3810 hashval_t
3811 module_state_hash::hash (const value_type m)
3812 {
3813 hashval_t ph = pointer_hash<void>::hash
3814 (reinterpret_cast<void *> (reinterpret_cast<uintptr_t> (m->parent)
3815 | m->is_partition ()));
3816 hashval_t nh = module_name_hash (m->name);
3817 return iterative_hash_hashval_t (ph, nh);
3818 }
3819
3820 /* Hash a name. */
3821 hashval_t
3822 module_state_hash::hash (const compare_type &c)
3823 {
3824 hashval_t ph = pointer_hash<void>::hash (reinterpret_cast<void *> (c.second));
3825 hashval_t nh = module_name_hash (c.first);
3826
3827 return iterative_hash_hashval_t (ph, nh);
3828 }
3829
3830 bool
3831 module_state_hash::equal (const value_type existing,
3832 const compare_type &candidate)
3833 {
3834 uintptr_t ep = (reinterpret_cast<uintptr_t> (existing->parent)
3835 | existing->is_partition ());
3836 if (ep != candidate.second)
3837 return false;
3838
3839 /* Identifier comparison is by pointer. If the string_csts happen
3840 to be the same object, then they're equal too. */
3841 if (existing->name == candidate.first)
3842 return true;
3843
3844 /* If neither are string csts, they can't be equal. */
3845 if (TREE_CODE (candidate.first) != STRING_CST
3846 || TREE_CODE (existing->name) != STRING_CST)
3847 return false;
3848
3849 /* String equality. */
3850 if (TREE_STRING_LENGTH (existing->name)
3851 == TREE_STRING_LENGTH (candidate.first)
3852 && !memcmp (TREE_STRING_POINTER (existing->name),
3853 TREE_STRING_POINTER (candidate.first),
3854 TREE_STRING_LENGTH (existing->name)))
3855 return true;
3856
3857 return false;
3858 }
3859
3860 /********************************************************************/
3861 /* Global state */
3862
3863 /* Mapper name. */
3864 static const char *module_mapper_name;
3865
3866 /* CMI repository path and workspace. */
3867 static char *cmi_repo;
3868 static size_t cmi_repo_length;
3869 static char *cmi_path;
3870 static size_t cmi_path_alloc;
3871
3872 /* Count of available and loaded clusters. */
3873 static unsigned available_clusters;
3874 static unsigned loaded_clusters;
3875
3876 /* What the current TU is. */
3877 unsigned module_kind;
3878
3879 /* Number of global init calls needed. */
3880 unsigned num_init_calls_needed = 0;
3881
3882 /* Global trees. */
3883 static const std::pair<tree *, unsigned> global_tree_arys[] =
3884 {
3885 std::pair<tree *, unsigned> (sizetype_tab, stk_type_kind_last),
3886 std::pair<tree *, unsigned> (integer_types, itk_none),
3887 std::pair<tree *, unsigned> (global_trees, TI_MODULE_HWM),
3888 std::pair<tree *, unsigned> (c_global_trees, CTI_MODULE_HWM),
3889 std::pair<tree *, unsigned> (cp_global_trees, CPTI_MODULE_HWM),
3890 std::pair<tree *, unsigned> (NULL, 0)
3891 };
3892 static GTY(()) vec<tree, va_gc> *fixed_trees;
3893 static unsigned global_crc;
3894
3895 /* Lazy loading can open many files concurrently, there are
3896 per-process limits on that. We pay attention to the process limit,
3897 and attempt to increase it when we run out. Otherwise we use an
3898 LRU scheme to figure out who to flush. Note that if the import
3899 graph /depth/ exceeds lazy_limit, we'll exceed the limit. */
3900 static unsigned lazy_lru; /* LRU counter. */
3901 static unsigned lazy_open; /* Number of open modules */
3902 static unsigned lazy_limit; /* Current limit of open modules. */
3903 static unsigned lazy_hard_limit; /* Hard limit on open modules. */
3904 /* Account for source, assembler and dump files & directory searches.
3905 We don't keep the source file's open, so we don't have to account
3906 for #include depth. I think dump files are opened and closed per
3907 pass, but ICBW. */
3908 #define LAZY_HEADROOM 15 /* File descriptor headroom. */
3909
3910 /* Vector of module state. Indexed by OWNER. Has at least 2 slots. */
3911 static GTY(()) vec<module_state *, va_gc> *modules;
3912
3913 /* Hash of module state, findable by {name, parent}. */
3914 static GTY(()) hash_table<module_state_hash> *modules_hash;
3915
3916 /* Map of imported entities. We map DECL_UID to index of entity
3917 vector. */
3918 typedef hash_map<unsigned/*UID*/, unsigned/*index*/,
3919 simple_hashmap_traits<int_hash<unsigned,0>, unsigned>
3920 > entity_map_t;
3921 static entity_map_t *entity_map;
3922 /* Doesn't need GTYing, because any tree referenced here is also
3923 findable by, symbol table, specialization table, return type of
3924 reachable function. */
3925 static vec<binding_slot, va_heap, vl_embed> *entity_ary;
3926
3927 /* Members entities of imported classes that are defined in this TU.
3928 These are where the entity's context is not from the current TU.
3929 We need to emit the definition (but not the enclosing class).
3930
3931 We could find these by walking ALL the imported classes that we
3932 could provide a member definition. But that's expensive,
3933 especially when you consider lazy implicit member declarations,
3934 which could be ANY imported class. */
3935 static GTY(()) vec<tree, va_gc> *class_members;
3936
3937 /* The same problem exists for class template partial
3938 specializations. Now that we have constraints, the invariant of
3939 expecting them in the instantiation table no longer holds. One of
3940 the constrained partial specializations will be there, but the
3941 others not so much. It's not even an unconstrained partial
3942 spacialization in the table :( so any partial template declaration
3943 is added to this list too. */
3944 static GTY(()) vec<tree, va_gc> *partial_specializations;
3945
3946 /********************************************************************/
3947
3948 /* Our module mapper (created lazily). */
3949 module_client *mapper;
3950
3951 static module_client *make_mapper (location_t loc);
3952 inline module_client *get_mapper (location_t loc)
3953 {
3954 auto *res = mapper;
3955 if (!res)
3956 res = make_mapper (loc);
3957 return res;
3958 }
3959
3960 /********************************************************************/
3961 static tree
3962 get_clone_target (tree decl)
3963 {
3964 tree target;
3965
3966 if (TREE_CODE (decl) == TEMPLATE_DECL)
3967 {
3968 tree res_orig = DECL_CLONED_FUNCTION (DECL_TEMPLATE_RESULT (decl));
3969
3970 target = DECL_TI_TEMPLATE (res_orig);
3971 }
3972 else
3973 target = DECL_CLONED_FUNCTION (decl);
3974
3975 gcc_checking_assert (DECL_MAYBE_IN_CHARGE_CDTOR_P (target));
3976
3977 return target;
3978 }
3979
3980 /* Like FOR_EACH_CLONE, but will walk cloned templates. */
3981 #define FOR_EVERY_CLONE(CLONE, FN) \
3982 if (!DECL_MAYBE_IN_CHARGE_CDTOR_P (FN)); \
3983 else \
3984 for (CLONE = DECL_CHAIN (FN); \
3985 CLONE && DECL_CLONED_FUNCTION_P (CLONE); \
3986 CLONE = DECL_CHAIN (CLONE))
3987
3988 /* It'd be nice if USE_TEMPLATE was a field of template_info
3989 (a) it'd solve the enum case dealt with below,
3990 (b) both class templates and decl templates would store this in the
3991 same place
3992 (c) this function wouldn't need the by-ref arg, which is annoying. */
3993
3994 static tree
3995 node_template_info (tree decl, int &use)
3996 {
3997 tree ti = NULL_TREE;
3998 int use_tpl = -1;
3999 if (DECL_IMPLICIT_TYPEDEF_P (decl))
4000 {
4001 tree type = TREE_TYPE (decl);
4002
4003 ti = TYPE_TEMPLATE_INFO (type);
4004 if (ti)
4005 {
4006 if (TYPE_LANG_SPECIFIC (type))
4007 use_tpl = CLASSTYPE_USE_TEMPLATE (type);
4008 else
4009 {
4010 /* An enum, where we don't explicitly encode use_tpl.
4011 If the containing context (a type or a function), is
4012 an ({im,ex}plicit) instantiation, then this is too.
4013 If it's a partial or explicit specialization, then
4014 this is not!. */
4015 tree ctx = CP_DECL_CONTEXT (decl);
4016 if (TYPE_P (ctx))
4017 ctx = TYPE_NAME (ctx);
4018 node_template_info (ctx, use);
4019 use_tpl = use != 2 ? use : 0;
4020 }
4021 }
4022 }
4023 else if (DECL_LANG_SPECIFIC (decl)
4024 && (TREE_CODE (decl) == VAR_DECL
4025 || TREE_CODE (decl) == TYPE_DECL
4026 || TREE_CODE (decl) == FUNCTION_DECL
4027 || TREE_CODE (decl) == FIELD_DECL
4028 || TREE_CODE (decl) == TEMPLATE_DECL))
4029 {
4030 use_tpl = DECL_USE_TEMPLATE (decl);
4031 ti = DECL_TEMPLATE_INFO (decl);
4032 }
4033
4034 use = use_tpl;
4035 return ti;
4036 }
4037
4038 /* Find the index in entity_ary for an imported DECL. It should
4039 always be there, but bugs can cause it to be missing, and that can
4040 crash the crash reporting -- let's not do that! When streaming
4041 out we place entities from this module there too -- with negated
4042 indices. */
4043
4044 static unsigned
4045 import_entity_index (tree decl, bool null_ok = false)
4046 {
4047 if (unsigned *slot = entity_map->get (DECL_UID (decl)))
4048 return *slot;
4049
4050 gcc_checking_assert (null_ok);
4051 return ~(~0u >> 1);
4052 }
4053
4054 /* Find the module for an imported entity at INDEX in the entity ary.
4055 There must be one. */
4056
4057 static module_state *
4058 import_entity_module (unsigned index)
4059 {
4060 if (index > ~(~0u >> 1))
4061 /* This is an index for an exported entity. */
4062 return (*modules)[0];
4063
4064 unsigned pos = 1;
4065 unsigned len = modules->length () - pos;
4066 while (len)
4067 {
4068 unsigned half = len / 2;
4069 module_state *probe = (*modules)[pos + half];
4070 if (index < probe->entity_lwm)
4071 len = half;
4072 else if (index < probe->entity_lwm + probe->entity_num)
4073 return probe;
4074 else
4075 {
4076 pos += half + 1;
4077 len = len - (half + 1);
4078 }
4079 }
4080 gcc_unreachable ();
4081 }
4082
4083
4084 /********************************************************************/
4085 /* A dumping machinery. */
4086
4087 class dumper {
4088 public:
4089 enum {
4090 LOCATION = TDF_LINENO, /* -lineno:Source location streaming. */
4091 DEPEND = TDF_GRAPH, /* -graph:Dependency graph construction. */
4092 CLUSTER = TDF_BLOCKS, /* -blocks:Clusters. */
4093 TREE = TDF_UID, /* -uid:Tree streaming. */
4094 MERGE = TDF_ALIAS, /* -alias:Mergeable Entities. */
4095 ELF = TDF_ASMNAME, /* -asmname:Elf data. */
4096 MACRO = TDF_VOPS /* -vops:Macros. */
4097 };
4098
4099 private:
4100 struct impl {
4101 typedef vec<module_state *, va_heap, vl_embed> stack_t;
4102
4103 FILE *stream; /* Dump stream. */
4104 unsigned indent; /* Local indentation. */
4105 bool bol; /* Beginning of line. */
4106 stack_t stack; /* Trailing array of module_state. */
4107
4108 bool nested_name (tree); /* Dump a name following DECL_CONTEXT. */
4109 };
4110
4111 public:
4112 /* The dumper. */
4113 impl *dumps;
4114 dump_flags_t flags;
4115
4116 public:
4117 /* Push/pop module state dumping. */
4118 unsigned push (module_state *);
4119 void pop (unsigned);
4120
4121 public:
4122 /* Change local indentation. */
4123 void indent ()
4124 {
4125 if (dumps)
4126 dumps->indent++;
4127 }
4128 void outdent ()
4129 {
4130 if (dumps)
4131 {
4132 gcc_checking_assert (dumps->indent);
4133 dumps->indent--;
4134 }
4135 }
4136
4137 public:
4138 /* Is dump enabled?. */
4139 bool operator () (int mask = 0)
4140 {
4141 if (!dumps || !dumps->stream)
4142 return false;
4143 if (mask && !(mask & flags))
4144 return false;
4145 return true;
4146 }
4147 /* Dump some information. */
4148 bool operator () (const char *, ...);
4149 };
4150
4151 /* The dumper. */
4152 static dumper dump = {0, dump_flags_t (0)};
4153
4154 /* Push to dumping M. Return previous indentation level. */
4155
4156 unsigned
4157 dumper::push (module_state *m)
4158 {
4159 FILE *stream = NULL;
4160 if (!dumps || !dumps->stack.length ())
4161 {
4162 stream = dump_begin (module_dump_id, &flags);
4163 if (!stream)
4164 return 0;
4165 }
4166
4167 if (!dumps || !dumps->stack.space (1))
4168 {
4169 /* Create or extend the dump implementor. */
4170 unsigned current = dumps ? dumps->stack.length () : 0;
4171 unsigned count = current ? current * 2 : EXPERIMENT (1, 20);
4172 size_t alloc = (offsetof (impl, stack)
4173 + impl::stack_t::embedded_size (count));
4174 dumps = XRESIZEVAR (impl, dumps, alloc);
4175 dumps->stack.embedded_init (count, current);
4176 }
4177 if (stream)
4178 dumps->stream = stream;
4179
4180 unsigned n = dumps->indent;
4181 dumps->indent = 0;
4182 dumps->bol = true;
4183 dumps->stack.quick_push (m);
4184 if (m)
4185 {
4186 module_state *from = NULL;
4187
4188 if (dumps->stack.length () > 1)
4189 from = dumps->stack[dumps->stack.length () - 2];
4190 else
4191 dump ("");
4192 dump (from ? "Starting module %M (from %M)"
4193 : "Starting module %M", m, from);
4194 }
4195
4196 return n;
4197 }
4198
4199 /* Pop from dumping. Restore indentation to N. */
4200
4201 void dumper::pop (unsigned n)
4202 {
4203 if (!dumps)
4204 return;
4205
4206 gcc_checking_assert (dump () && !dumps->indent);
4207 if (module_state *m = dumps->stack[dumps->stack.length () - 1])
4208 {
4209 module_state *from = (dumps->stack.length () > 1
4210 ? dumps->stack[dumps->stack.length () - 2] : NULL);
4211 dump (from ? "Finishing module %M (returning to %M)"
4212 : "Finishing module %M", m, from);
4213 }
4214 dumps->stack.pop ();
4215 dumps->indent = n;
4216 if (!dumps->stack.length ())
4217 {
4218 dump_end (module_dump_id, dumps->stream);
4219 dumps->stream = NULL;
4220 }
4221 }
4222
4223 /* Dump a nested name for arbitrary tree T. Sometimes it won't have a
4224 name. */
4225
4226 bool
4227 dumper::impl::nested_name (tree t)
4228 {
4229 tree ti = NULL_TREE;
4230 int origin = -1;
4231 tree name = NULL_TREE;
4232
4233 if (t && TREE_CODE (t) == TREE_BINFO)
4234 t = BINFO_TYPE (t);
4235
4236 if (t && TYPE_P (t))
4237 t = TYPE_NAME (t);
4238
4239 if (t && DECL_P (t))
4240 {
4241 if (t == global_namespace || DECL_TEMPLATE_PARM_P (t))
4242 ;
4243 else if (tree ctx = DECL_CONTEXT (t))
4244 if (TREE_CODE (ctx) == TRANSLATION_UNIT_DECL
4245 || nested_name (ctx))
4246 fputs ("::", stream);
4247
4248 int use_tpl;
4249 ti = node_template_info (t, use_tpl);
4250 if (ti && TREE_CODE (TI_TEMPLATE (ti)) == TEMPLATE_DECL
4251 && (DECL_TEMPLATE_RESULT (TI_TEMPLATE (ti)) == t))
4252 t = TI_TEMPLATE (ti);
4253 if (TREE_CODE (t) == TEMPLATE_DECL)
4254 fputs ("template ", stream);
4255
4256 if (DECL_LANG_SPECIFIC (t) && DECL_MODULE_IMPORT_P (t))
4257 {
4258 /* We need to be careful here, so as to not explode on
4259 inconsistent data -- we're probably debugging, because
4260 Something Is Wrong. */
4261 unsigned index = import_entity_index (t, true);
4262 if (!(index & ~(~0u >> 1)))
4263 origin = import_entity_module (index)->mod;
4264 else if (index > ~(~0u >> 1))
4265 /* An imported partition member that we're emitting. */
4266 origin = 0;
4267 else
4268 origin = -2;
4269 }
4270
4271 name = DECL_NAME (t) ? DECL_NAME (t)
4272 : HAS_DECL_ASSEMBLER_NAME_P (t) ? DECL_ASSEMBLER_NAME_RAW (t)
4273 : NULL_TREE;
4274 }
4275 else
4276 name = t;
4277
4278 if (name)
4279 switch (TREE_CODE (name))
4280 {
4281 default:
4282 fputs ("#unnamed#", stream);
4283 break;
4284
4285 case IDENTIFIER_NODE:
4286 fwrite (IDENTIFIER_POINTER (name), 1, IDENTIFIER_LENGTH (name), stream);
4287 break;
4288
4289 case INTEGER_CST:
4290 print_hex (wi::to_wide (name), stream);
4291 break;
4292
4293 case STRING_CST:
4294 /* If TREE_TYPE is NULL, this is a raw string. */
4295 fwrite (TREE_STRING_POINTER (name), 1,
4296 TREE_STRING_LENGTH (name) - (TREE_TYPE (name) != NULL_TREE),
4297 stream);
4298 break;
4299 }
4300 else
4301 fputs ("#null#", stream);
4302
4303 if (origin >= 0)
4304 {
4305 const module_state *module = (*modules)[origin];
4306 fprintf (stream, "@%s:%d", !module ? "" : !module->name ? "(unnamed)"
4307 : module->get_flatname (), origin);
4308 }
4309 else if (origin == -2)
4310 fprintf (stream, "@???");
4311
4312 if (ti)
4313 {
4314 tree args = INNERMOST_TEMPLATE_ARGS (TI_ARGS (ti));
4315 fputs ("<", stream);
4316 if (args)
4317 for (int ix = 0; ix != TREE_VEC_LENGTH (args); ix++)
4318 {
4319 if (ix)
4320 fputs (",", stream);
4321 nested_name (TREE_VEC_ELT (args, ix));
4322 }
4323 fputs (">", stream);
4324 }
4325
4326 return true;
4327 }
4328
4329 /* Formatted dumping. FORMAT begins with '+' do not emit a trailing
4330 new line. (Normally it is appended.)
4331 Escapes:
4332 %C - tree_code
4333 %I - identifier
4334 %M - module_state
4335 %N - name -- DECL_NAME
4336 %P - context:name pair
4337 %R - unsigned:unsigned ratio
4338 %S - symbol -- DECL_ASSEMBLER_NAME
4339 %U - long unsigned
4340 %V - version
4341 --- the following are printf-like, but without its flexibility
4342 %d - decimal int
4343 %p - pointer
4344 %s - string
4345 %u - unsigned int
4346 %x - hex int
4347
4348 We do not implement the printf modifiers. */
4349
4350 bool
4351 dumper::operator () (const char *format, ...)
4352 {
4353 if (!(*this) ())
4354 return false;
4355
4356 bool no_nl = format[0] == '+';
4357 format += no_nl;
4358
4359 if (dumps->bol)
4360 {
4361 /* Module import indent. */
4362 if (unsigned depth = dumps->stack.length () - 1)
4363 {
4364 const char *prefix = ">>>>";
4365 fprintf (dumps->stream, (depth <= strlen (prefix)
4366 ? &prefix[strlen (prefix) - depth]
4367 : ">.%d.>"), depth);
4368 }
4369
4370 /* Local indent. */
4371 if (unsigned indent = dumps->indent)
4372 {
4373 const char *prefix = " ";
4374 fprintf (dumps->stream, (indent <= strlen (prefix)
4375 ? &prefix[strlen (prefix) - indent]
4376 : " .%d. "), indent);
4377 }
4378 dumps->bol = false;
4379 }
4380
4381 va_list args;
4382 va_start (args, format);
4383 while (const char *esc = strchr (format, '%'))
4384 {
4385 fwrite (format, 1, (size_t)(esc - format), dumps->stream);
4386 format = ++esc;
4387 switch (*format++)
4388 {
4389 default:
4390 gcc_unreachable ();
4391
4392 case '%':
4393 fputc ('%', dumps->stream);
4394 break;
4395
4396 case 'C': /* Code */
4397 {
4398 tree_code code = (tree_code)va_arg (args, unsigned);
4399 fputs (get_tree_code_name (code), dumps->stream);
4400 }
4401 break;
4402
4403 case 'I': /* Identifier. */
4404 {
4405 tree t = va_arg (args, tree);
4406 dumps->nested_name (t);
4407 }
4408 break;
4409
4410 case 'M': /* Module. */
4411 {
4412 const char *str = "(none)";
4413 if (module_state *m = va_arg (args, module_state *))
4414 {
4415 if (!m->is_rooted ())
4416 str = "(detached)";
4417 else
4418 str = m->get_flatname ();
4419 }
4420 fputs (str, dumps->stream);
4421 }
4422 break;
4423
4424 case 'N': /* Name. */
4425 {
4426 tree t = va_arg (args, tree);
4427 if (t && TREE_CODE (t) == OVERLOAD)
4428 t = OVL_FIRST (t);
4429 fputc ('\'', dumps->stream);
4430 dumps->nested_name (t);
4431 fputc ('\'', dumps->stream);
4432 }
4433 break;
4434
4435 case 'P': /* Pair. */
4436 {
4437 tree ctx = va_arg (args, tree);
4438 tree name = va_arg (args, tree);
4439 fputc ('\'', dumps->stream);
4440 dumps->nested_name (ctx);
4441 if (ctx && ctx != global_namespace)
4442 fputs ("::", dumps->stream);
4443 dumps->nested_name (name);
4444 fputc ('\'', dumps->stream);
4445 }
4446 break;
4447
4448 case 'R': /* Ratio */
4449 {
4450 unsigned a = va_arg (args, unsigned);
4451 unsigned b = va_arg (args, unsigned);
4452 fprintf (dumps->stream, "%.1f", (float) a / (b + !b));
4453 }
4454 break;
4455
4456 case 'S': /* Symbol name */
4457 {
4458 tree t = va_arg (args, tree);
4459 if (t && TYPE_P (t))
4460 t = TYPE_NAME (t);
4461 if (t && HAS_DECL_ASSEMBLER_NAME_P (t)
4462 && DECL_ASSEMBLER_NAME_SET_P (t))
4463 {
4464 fputc ('(', dumps->stream);
4465 fputs (IDENTIFIER_POINTER (DECL_ASSEMBLER_NAME (t)),
4466 dumps->stream);
4467 fputc (')', dumps->stream);
4468 }
4469 }
4470 break;
4471
4472 case 'U': /* long unsigned. */
4473 {
4474 unsigned long u = va_arg (args, unsigned long);
4475 fprintf (dumps->stream, "%lu", u);
4476 }
4477 break;
4478
4479 case 'V': /* Verson. */
4480 {
4481 unsigned v = va_arg (args, unsigned);
4482 verstr_t string;
4483
4484 version2string (v, string);
4485 fputs (string, dumps->stream);
4486 }
4487 break;
4488
4489 case 'c': /* Character. */
4490 {
4491 int c = va_arg (args, int);
4492 fputc (c, dumps->stream);
4493 }
4494 break;
4495
4496 case 'd': /* Decimal Int. */
4497 {
4498 int d = va_arg (args, int);
4499 fprintf (dumps->stream, "%d", d);
4500 }
4501 break;
4502
4503 case 'p': /* Pointer. */
4504 {
4505 void *p = va_arg (args, void *);
4506 fprintf (dumps->stream, "%p", p);
4507 }
4508 break;
4509
4510 case 's': /* String. */
4511 {
4512 const char *s = va_arg (args, char *);
4513 gcc_checking_assert (s);
4514 fputs (s, dumps->stream);
4515 }
4516 break;
4517
4518 case 'u': /* Unsigned. */
4519 {
4520 unsigned u = va_arg (args, unsigned);
4521 fprintf (dumps->stream, "%u", u);
4522 }
4523 break;
4524
4525 case 'x': /* Hex. */
4526 {
4527 unsigned x = va_arg (args, unsigned);
4528 fprintf (dumps->stream, "%x", x);
4529 }
4530 break;
4531 }
4532 }
4533 fputs (format, dumps->stream);
4534 va_end (args);
4535 if (!no_nl)
4536 {
4537 dumps->bol = true;
4538 fputc ('\n', dumps->stream);
4539 }
4540 return true;
4541 }
4542
4543 struct note_def_cache_hasher : ggc_cache_ptr_hash<tree_node>
4544 {
4545 static int keep_cache_entry (tree t)
4546 {
4547 if (!CHECKING_P)
4548 /* GTY is unfortunately not clever enough to conditionalize
4549 this. */
4550 gcc_unreachable ();
4551
4552 if (ggc_marked_p (t))
4553 return -1;
4554
4555 unsigned n = dump.push (NULL);
4556 /* This might or might not be an error. We should note its
4557 dropping whichever. */
4558 dump () && dump ("Dropping %N from note_defs table", t);
4559 dump.pop (n);
4560
4561 return 0;
4562 }
4563 };
4564
4565 /* We should stream each definition at most once.
4566 This needs to be a cache because there are cases where a definition
4567 ends up being not retained, and we need to drop those so we don't
4568 get confused if memory is reallocated. */
4569 typedef hash_table<note_def_cache_hasher> note_defs_table_t;
4570 static GTY((cache)) note_defs_table_t *note_defs;
4571
4572 void
4573 trees_in::assert_definition (tree decl ATTRIBUTE_UNUSED,
4574 bool installing ATTRIBUTE_UNUSED)
4575 {
4576 #if CHECKING_P
4577 tree *slot = note_defs->find_slot (decl, installing ? INSERT : NO_INSERT);
4578 if (installing)
4579 {
4580 /* We must be inserting for the first time. */
4581 gcc_assert (!*slot);
4582 *slot = decl;
4583 }
4584 else
4585 /* If this is not the mergeable entity, it should not be in the
4586 table. If it is a non-global-module mergeable entity, it
4587 should be in the table. Global module entities could have been
4588 defined textually in the current TU and so might or might not
4589 be present. */
4590 gcc_assert (!is_duplicate (decl)
4591 ? !slot
4592 : (slot
4593 || !DECL_LANG_SPECIFIC (decl)
4594 || !DECL_MODULE_PURVIEW_P (decl)
4595 || (!DECL_MODULE_IMPORT_P (decl)
4596 && header_module_p ())));
4597
4598 if (TREE_CODE (decl) == TEMPLATE_DECL)
4599 gcc_assert (!note_defs->find_slot (DECL_TEMPLATE_RESULT (decl), NO_INSERT));
4600 #endif
4601 }
4602
4603 void
4604 trees_out::assert_definition (tree decl ATTRIBUTE_UNUSED)
4605 {
4606 #if CHECKING_P
4607 tree *slot = note_defs->find_slot (decl, INSERT);
4608 gcc_assert (!*slot);
4609 *slot = decl;
4610 if (TREE_CODE (decl) == TEMPLATE_DECL)
4611 gcc_assert (!note_defs->find_slot (DECL_TEMPLATE_RESULT (decl), NO_INSERT));
4612 #endif
4613 }
4614
4615 /********************************************************************/
4616 static bool
4617 noisy_p ()
4618 {
4619 if (quiet_flag)
4620 return false;
4621
4622 pp_needs_newline (global_dc->printer) = true;
4623 diagnostic_set_last_function (global_dc, (diagnostic_info *) NULL);
4624
4625 return true;
4626 }
4627
4628 /* Set the cmi repo. Strip trailing '/', '.' becomes NULL. */
4629
4630 static void
4631 set_cmi_repo (const char *r)
4632 {
4633 XDELETEVEC (cmi_repo);
4634 XDELETEVEC (cmi_path);
4635 cmi_path_alloc = 0;
4636
4637 cmi_repo = NULL;
4638 cmi_repo_length = 0;
4639
4640 if (!r || !r[0])
4641 return;
4642
4643 size_t len = strlen (r);
4644 cmi_repo = XNEWVEC (char, len + 1);
4645 memcpy (cmi_repo, r, len + 1);
4646
4647 if (len > 1 && IS_DIR_SEPARATOR (cmi_repo[len-1]))
4648 len--;
4649 if (len == 1 && cmi_repo[0] == '.')
4650 len--;
4651 cmi_repo[len] = 0;
4652 cmi_repo_length = len;
4653 }
4654
4655 /* TO is a repo-relative name. Provide one that we may use from where
4656 we are. */
4657
4658 static const char *
4659 maybe_add_cmi_prefix (const char *to, size_t *len_p = NULL)
4660 {
4661 size_t len = len_p || cmi_repo_length ? strlen (to) : 0;
4662
4663 if (cmi_repo_length && !IS_ABSOLUTE_PATH (to))
4664 {
4665 if (cmi_path_alloc < cmi_repo_length + len + 2)
4666 {
4667 XDELETEVEC (cmi_path);
4668 cmi_path_alloc = cmi_repo_length + len * 2 + 2;
4669 cmi_path = XNEWVEC (char, cmi_path_alloc);
4670
4671 memcpy (cmi_path, cmi_repo, cmi_repo_length);
4672 cmi_path[cmi_repo_length] = DIR_SEPARATOR;
4673 }
4674
4675 memcpy (&cmi_path[cmi_repo_length + 1], to, len + 1);
4676 len += cmi_repo_length + 1;
4677 to = cmi_path;
4678 }
4679
4680 if (len_p)
4681 *len_p = len;
4682
4683 return to;
4684 }
4685
4686 /* Try and create the directories of PATH. */
4687
4688 static void
4689 create_dirs (char *path)
4690 {
4691 /* Try and create the missing directories. */
4692 for (char *base = path; *base; base++)
4693 if (IS_DIR_SEPARATOR (*base))
4694 {
4695 char sep = *base;
4696 *base = 0;
4697 int failed = mkdir (path, S_IRWXU | S_IRWXG | S_IRWXO);
4698 dump () && dump ("Mkdir ('%s') errno:=%u", path, failed ? errno : 0);
4699 *base = sep;
4700 if (failed
4701 /* Maybe racing with another creator (of a *different*
4702 module). */
4703 && errno != EEXIST)
4704 break;
4705 }
4706 }
4707
4708 /* Given a CLASSTYPE_DECL_LIST VALUE get the the template friend decl,
4709 if that's what this is. */
4710
4711 static tree
4712 friend_from_decl_list (tree frnd)
4713 {
4714 tree res = frnd;
4715
4716 if (TREE_CODE (frnd) != TEMPLATE_DECL)
4717 {
4718 tree tmpl = NULL_TREE;
4719 if (TYPE_P (frnd))
4720 {
4721 res = TYPE_NAME (frnd);
4722 if (CLASSTYPE_TEMPLATE_INFO (frnd))
4723 tmpl = CLASSTYPE_TI_TEMPLATE (frnd);
4724 }
4725 else if (DECL_TEMPLATE_INFO (frnd))
4726 {
4727 tmpl = DECL_TI_TEMPLATE (frnd);
4728 if (TREE_CODE (tmpl) != TEMPLATE_DECL)
4729 tmpl = NULL_TREE;
4730 }
4731
4732 if (tmpl && DECL_TEMPLATE_RESULT (tmpl) == res)
4733 res = tmpl;
4734 }
4735
4736 return res;
4737 }
4738
4739 static tree
4740 find_enum_member (tree ctx, tree name)
4741 {
4742 for (tree values = TYPE_VALUES (ctx);
4743 values; values = TREE_CHAIN (values))
4744 if (DECL_NAME (TREE_VALUE (values)) == name)
4745 return TREE_VALUE (values);
4746
4747 return NULL_TREE;
4748 }
4749
4750 /********************************************************************/
4751 /* Instrumentation gathered writing bytes. */
4752
4753 void
4754 bytes_out::instrument ()
4755 {
4756 dump ("Wrote %u bytes in %u blocks", lengths[3], spans[3]);
4757 dump ("Wrote %u bits in %u bytes", lengths[0] + lengths[1], lengths[2]);
4758 for (unsigned ix = 0; ix < 2; ix++)
4759 dump (" %u %s spans of %R bits", spans[ix],
4760 ix ? "one" : "zero", lengths[ix], spans[ix]);
4761 dump (" %u blocks with %R bits padding", spans[2],
4762 lengths[2] * 8 - (lengths[0] + lengths[1]), spans[2]);
4763 }
4764
4765 /* Instrumentation gathered writing trees. */
4766 void
4767 trees_out::instrument ()
4768 {
4769 if (dump (""))
4770 {
4771 bytes_out::instrument ();
4772 dump ("Wrote:");
4773 dump (" %u decl trees", decl_val_count);
4774 dump (" %u other trees", tree_val_count);
4775 dump (" %u back references", back_ref_count);
4776 dump (" %u null trees", null_count);
4777 }
4778 }
4779
4780 /* Setup and teardown for a tree walk. */
4781
4782 void
4783 trees_out::begin ()
4784 {
4785 gcc_assert (!streaming_p () || !tree_map.elements ());
4786
4787 mark_trees ();
4788 if (streaming_p ())
4789 parent::begin ();
4790 }
4791
4792 unsigned
4793 trees_out::end (elf_out *sink, unsigned name, unsigned *crc_ptr)
4794 {
4795 gcc_checking_assert (streaming_p ());
4796
4797 unmark_trees ();
4798 return parent::end (sink, name, crc_ptr);
4799 }
4800
4801 void
4802 trees_out::end ()
4803 {
4804 gcc_assert (!streaming_p ());
4805
4806 unmark_trees ();
4807 /* Do not parent::end -- we weren't streaming. */
4808 }
4809
4810 void
4811 trees_out::mark_trees ()
4812 {
4813 if (size_t size = tree_map.elements ())
4814 {
4815 /* This isn't our first rodeo, destroy and recreate the
4816 tree_map. I'm a bad bad man. Use the previous size as a
4817 guess for the next one (so not all bad). */
4818 tree_map.~ptr_int_hash_map ();
4819 new (&tree_map) ptr_int_hash_map (size);
4820 }
4821
4822 /* Install the fixed trees, with +ve references. */
4823 unsigned limit = fixed_trees->length ();
4824 for (unsigned ix = 0; ix != limit; ix++)
4825 {
4826 tree val = (*fixed_trees)[ix];
4827 bool existed = tree_map.put (val, ix + tag_fixed);
4828 gcc_checking_assert (!TREE_VISITED (val) && !existed);
4829 TREE_VISITED (val) = true;
4830 }
4831
4832 ref_num = 0;
4833 }
4834
4835 /* Unmark the trees we encountered */
4836
4837 void
4838 trees_out::unmark_trees ()
4839 {
4840 ptr_int_hash_map::iterator end (tree_map.end ());
4841 for (ptr_int_hash_map::iterator iter (tree_map.begin ()); iter != end; ++iter)
4842 {
4843 tree node = reinterpret_cast<tree> ((*iter).first);
4844 int ref = (*iter).second;
4845 /* We should have visited the node, and converted its mergeable
4846 reference to a regular reference. */
4847 gcc_checking_assert (TREE_VISITED (node)
4848 && (ref <= tag_backref || ref >= tag_fixed));
4849 TREE_VISITED (node) = false;
4850 }
4851 }
4852
4853 /* Mark DECL for by-value walking. We do this by inserting it into
4854 the tree map with a reference of zero. May be called multiple
4855 times on the same node. */
4856
4857 void
4858 trees_out::mark_by_value (tree decl)
4859 {
4860 gcc_checking_assert (DECL_P (decl)
4861 /* Enum consts are INTEGER_CSTS. */
4862 || TREE_CODE (decl) == INTEGER_CST
4863 || TREE_CODE (decl) == TREE_BINFO);
4864
4865 if (TREE_VISITED (decl))
4866 /* Must already be forced or fixed. */
4867 gcc_checking_assert (*tree_map.get (decl) >= tag_value);
4868 else
4869 {
4870 bool existed = tree_map.put (decl, tag_value);
4871 gcc_checking_assert (!existed);
4872 TREE_VISITED (decl) = true;
4873 }
4874 }
4875
4876 int
4877 trees_out::get_tag (tree t)
4878 {
4879 gcc_checking_assert (TREE_VISITED (t));
4880 return *tree_map.get (t);
4881 }
4882
4883 /* Insert T into the map, return its tag number. */
4884
4885 int
4886 trees_out::insert (tree t, walk_kind walk)
4887 {
4888 gcc_checking_assert (walk != WK_normal || !TREE_VISITED (t));
4889 int tag = --ref_num;
4890 bool existed;
4891 int &slot = tree_map.get_or_insert (t, &existed);
4892 gcc_checking_assert (TREE_VISITED (t) == existed
4893 && (!existed
4894 || (walk == WK_value && slot == tag_value)));
4895 TREE_VISITED (t) = true;
4896 slot = tag;
4897
4898 return tag;
4899 }
4900
4901 /* Insert T into the backreference array. Return its back reference
4902 number. */
4903
4904 int
4905 trees_in::insert (tree t)
4906 {
4907 gcc_checking_assert (t || get_overrun ());
4908 back_refs.safe_push (t);
4909 return -(int)back_refs.length ();
4910 }
4911
4912 /* A chained set of decls. */
4913
4914 void
4915 trees_out::chained_decls (tree decls)
4916 {
4917 for (; decls; decls = DECL_CHAIN (decls))
4918 {
4919 if (VAR_OR_FUNCTION_DECL_P (decls)
4920 && DECL_LOCAL_DECL_P (decls))
4921 {
4922 /* Make sure this is the first encounter, and mark for
4923 walk-by-value. */
4924 gcc_checking_assert (!TREE_VISITED (decls)
4925 && !DECL_TEMPLATE_INFO (decls));
4926 mark_by_value (decls);
4927 }
4928 tree_node (decls);
4929 }
4930 tree_node (NULL_TREE);
4931 }
4932
4933 tree
4934 trees_in::chained_decls ()
4935 {
4936 tree decls = NULL_TREE;
4937 for (tree *chain = &decls;;)
4938 if (tree decl = tree_node ())
4939 {
4940 if (!DECL_P (decl) || DECL_CHAIN (decl))
4941 {
4942 set_overrun ();
4943 break;
4944 }
4945 *chain = decl;
4946 chain = &DECL_CHAIN (decl);
4947 }
4948 else
4949 break;
4950
4951 return decls;
4952 }
4953
4954 /* A vector of decls following DECL_CHAIN. */
4955
4956 void
4957 trees_out::vec_chained_decls (tree decls)
4958 {
4959 if (streaming_p ())
4960 {
4961 unsigned len = 0;
4962
4963 for (tree decl = decls; decl; decl = DECL_CHAIN (decl))
4964 len++;
4965 u (len);
4966 }
4967
4968 for (tree decl = decls; decl; decl = DECL_CHAIN (decl))
4969 {
4970 if (DECL_IMPLICIT_TYPEDEF_P (decl)
4971 && TYPE_NAME (TREE_TYPE (decl)) != decl)
4972 /* An anonynmous struct with a typedef name. An odd thing to
4973 write. */
4974 tree_node (NULL_TREE);
4975 else
4976 tree_node (decl);
4977 }
4978 }
4979
4980 vec<tree, va_heap> *
4981 trees_in::vec_chained_decls ()
4982 {
4983 vec<tree, va_heap> *v = NULL;
4984
4985 if (unsigned len = u ())
4986 {
4987 vec_alloc (v, len);
4988
4989 for (unsigned ix = 0; ix < len; ix++)
4990 {
4991 tree decl = tree_node ();
4992 if (decl && !DECL_P (decl))
4993 {
4994 set_overrun ();
4995 break;
4996 }
4997 v->quick_push (decl);
4998 }
4999
5000 if (get_overrun ())
5001 {
5002 vec_free (v);
5003 v = NULL;
5004 }
5005 }
5006
5007 return v;
5008 }
5009
5010 /* A vector of trees. */
5011
5012 void
5013 trees_out::tree_vec (vec<tree, va_gc> *v)
5014 {
5015 unsigned len = vec_safe_length (v);
5016 if (streaming_p ())
5017 u (len);
5018 for (unsigned ix = 0; ix != len; ix++)
5019 tree_node ((*v)[ix]);
5020 }
5021
5022 vec<tree, va_gc> *
5023 trees_in::tree_vec ()
5024 {
5025 vec<tree, va_gc> *v = NULL;
5026 if (unsigned len = u ())
5027 {
5028 vec_alloc (v, len);
5029 for (unsigned ix = 0; ix != len; ix++)
5030 v->quick_push (tree_node ());
5031 }
5032 return v;
5033 }
5034
5035 /* A vector of tree pairs. */
5036
5037 void
5038 trees_out::tree_pair_vec (vec<tree_pair_s, va_gc> *v)
5039 {
5040 unsigned len = vec_safe_length (v);
5041 if (streaming_p ())
5042 u (len);
5043 if (len)
5044 for (unsigned ix = 0; ix != len; ix++)
5045 {
5046 tree_pair_s const &s = (*v)[ix];
5047 tree_node (s.purpose);
5048 tree_node (s.value);
5049 }
5050 }
5051
5052 vec<tree_pair_s, va_gc> *
5053 trees_in::tree_pair_vec ()
5054 {
5055 vec<tree_pair_s, va_gc> *v = NULL;
5056 if (unsigned len = u ())
5057 {
5058 vec_alloc (v, len);
5059 for (unsigned ix = 0; ix != len; ix++)
5060 {
5061 tree_pair_s s;
5062 s.purpose = tree_node ();
5063 s.value = tree_node ();
5064 v->quick_push (s);
5065 }
5066 }
5067 return v;
5068 }
5069
5070 void
5071 trees_out::tree_list (tree list, bool has_purpose)
5072 {
5073 for (; list; list = TREE_CHAIN (list))
5074 {
5075 gcc_checking_assert (TREE_VALUE (list));
5076 tree_node (TREE_VALUE (list));
5077 if (has_purpose)
5078 tree_node (TREE_PURPOSE (list));
5079 }
5080 tree_node (NULL_TREE);
5081 }
5082
5083 tree
5084 trees_in::tree_list (bool has_purpose)
5085 {
5086 tree res = NULL_TREE;
5087
5088 for (tree *chain = &res; tree value = tree_node ();
5089 chain = &TREE_CHAIN (*chain))
5090 {
5091 tree purpose = has_purpose ? tree_node () : NULL_TREE;
5092 *chain = build_tree_list (purpose, value);
5093 }
5094
5095 return res;
5096 }
5097 /* Start tree write. Write information to allocate the receiving
5098 node. */
5099
5100 void
5101 trees_out::start (tree t, bool code_streamed)
5102 {
5103 if (TYPE_P (t))
5104 {
5105 enum tree_code code = TREE_CODE (t);
5106 gcc_checking_assert (TYPE_MAIN_VARIANT (t) == t);
5107 /* All these types are TYPE_NON_COMMON. */
5108 gcc_checking_assert (code == RECORD_TYPE
5109 || code == UNION_TYPE
5110 || code == ENUMERAL_TYPE
5111 || code == TEMPLATE_TYPE_PARM
5112 || code == TEMPLATE_TEMPLATE_PARM
5113 || code == BOUND_TEMPLATE_TEMPLATE_PARM);
5114 }
5115
5116 if (!code_streamed)
5117 u (TREE_CODE (t));
5118
5119 switch (TREE_CODE (t))
5120 {
5121 default:
5122 if (TREE_CODE_CLASS (TREE_CODE (t)) == tcc_vl_exp)
5123 u (VL_EXP_OPERAND_LENGTH (t));
5124 break;
5125
5126 case INTEGER_CST:
5127 u (TREE_INT_CST_NUNITS (t));
5128 u (TREE_INT_CST_EXT_NUNITS (t));
5129 u (TREE_INT_CST_OFFSET_NUNITS (t));
5130 break;
5131
5132 case OMP_CLAUSE:
5133 state->extensions |= SE_OPENMP;
5134 u (OMP_CLAUSE_CODE (t));
5135 break;
5136
5137 case STRING_CST:
5138 str (TREE_STRING_POINTER (t), TREE_STRING_LENGTH (t));
5139 break;
5140
5141 case VECTOR_CST:
5142 u (VECTOR_CST_LOG2_NPATTERNS (t));
5143 u (VECTOR_CST_NELTS_PER_PATTERN (t));
5144 break;
5145
5146 case TREE_BINFO:
5147 u (BINFO_N_BASE_BINFOS (t));
5148 break;
5149
5150 case TREE_VEC:
5151 u (TREE_VEC_LENGTH (t));
5152 break;
5153
5154 case FIXED_CST:
5155 case POLY_INT_CST:
5156 gcc_unreachable (); /* Not supported in C++. */
5157 break;
5158
5159 case IDENTIFIER_NODE:
5160 case SSA_NAME:
5161 case TARGET_MEM_REF:
5162 case TRANSLATION_UNIT_DECL:
5163 /* We shouldn't meet these. */
5164 gcc_unreachable ();
5165 break;
5166 }
5167 }
5168
5169 /* Start tree read. Allocate the receiving node. */
5170
5171 tree
5172 trees_in::start (unsigned code)
5173 {
5174 tree t = NULL_TREE;
5175
5176 if (!code)
5177 code = u ();
5178
5179 switch (code)
5180 {
5181 default:
5182 if (code >= MAX_TREE_CODES)
5183 {
5184 fail:
5185 set_overrun ();
5186 return NULL_TREE;
5187 }
5188 else if (TREE_CODE_CLASS (code) == tcc_vl_exp)
5189 {
5190 unsigned ops = u ();
5191 t = build_vl_exp (tree_code (code), ops);
5192 }
5193 else
5194 t = make_node (tree_code (code));
5195 break;
5196
5197 case INTEGER_CST:
5198 {
5199 unsigned n = u ();
5200 unsigned e = u ();
5201 t = make_int_cst (n, e);
5202 TREE_INT_CST_OFFSET_NUNITS(t) = u ();
5203 }
5204 break;
5205
5206 case OMP_CLAUSE:
5207 {
5208 if (!(state->extensions & SE_OPENMP))
5209 goto fail;
5210
5211 unsigned omp_code = u ();
5212 t = build_omp_clause (UNKNOWN_LOCATION, omp_clause_code (omp_code));
5213 }
5214 break;
5215
5216 case STRING_CST:
5217 {
5218 size_t l;
5219 const char *chars = str (&l);
5220 t = build_string (l, chars);
5221 }
5222 break;
5223
5224 case VECTOR_CST:
5225 {
5226 unsigned log2_npats = u ();
5227 unsigned elts_per = u ();
5228 t = make_vector (log2_npats, elts_per);
5229 }
5230 break;
5231
5232 case TREE_BINFO:
5233 t = make_tree_binfo (u ());
5234 break;
5235
5236 case TREE_VEC:
5237 t = make_tree_vec (u ());
5238 break;
5239
5240 case FIXED_CST:
5241 case IDENTIFIER_NODE:
5242 case POLY_INT_CST:
5243 case SSA_NAME:
5244 case TARGET_MEM_REF:
5245 case TRANSLATION_UNIT_DECL:
5246 goto fail;
5247 }
5248
5249 return t;
5250 }
5251
5252 /* The structure streamers access the raw fields, because the
5253 alternative, of using the accessor macros can require using
5254 different accessors for the same underlying field, depending on the
5255 tree code. That's both confusing and annoying. */
5256
5257 /* Read & write the core boolean flags. */
5258
5259 void
5260 trees_out::core_bools (tree t)
5261 {
5262 #define WB(X) (b (X))
5263 tree_code code = TREE_CODE (t);
5264
5265 WB (t->base.side_effects_flag);
5266 WB (t->base.constant_flag);
5267 WB (t->base.addressable_flag);
5268 WB (t->base.volatile_flag);
5269 WB (t->base.readonly_flag);
5270 /* base.asm_written_flag is a property of the current TU's use of
5271 this decl. */
5272 WB (t->base.nowarning_flag);
5273 /* base.visited read as zero (it's set for writer, because that's
5274 how we mark nodes). */
5275 /* base.used_flag is not streamed. Readers may set TREE_USED of
5276 decls they use. */
5277 WB (t->base.nothrow_flag);
5278 WB (t->base.static_flag);
5279 if (TREE_CODE_CLASS (code) != tcc_type)
5280 /* This is TYPE_CACHED_VALUES_P for types. */
5281 WB (t->base.public_flag);
5282 WB (t->base.private_flag);
5283 WB (t->base.protected_flag);
5284 WB (t->base.deprecated_flag);
5285 WB (t->base.default_def_flag);
5286
5287 switch (code)
5288 {
5289 case CALL_EXPR:
5290 case INTEGER_CST:
5291 case SSA_NAME:
5292 case TARGET_MEM_REF:
5293 case TREE_VEC:
5294 /* These use different base.u fields. */
5295 break;
5296
5297 default:
5298 WB (t->base.u.bits.lang_flag_0);
5299 bool flag_1 = t->base.u.bits.lang_flag_1;
5300 if (!flag_1)
5301 ;
5302 else if (code == TEMPLATE_INFO)
5303 /* This is TI_PENDING_TEMPLATE_FLAG, not relevant to reader. */
5304 flag_1 = false;
5305 else if (code == VAR_DECL)
5306 {
5307 /* This is DECL_INITIALIZED_P. */
5308 if (DECL_CONTEXT (t)
5309 && TREE_CODE (DECL_CONTEXT (t)) != FUNCTION_DECL)
5310 /* We'll set this when reading the definition. */
5311 flag_1 = false;
5312 }
5313 WB (flag_1);
5314 WB (t->base.u.bits.lang_flag_2);
5315 WB (t->base.u.bits.lang_flag_3);
5316 WB (t->base.u.bits.lang_flag_4);
5317 WB (t->base.u.bits.lang_flag_5);
5318 WB (t->base.u.bits.lang_flag_6);
5319 WB (t->base.u.bits.saturating_flag);
5320 WB (t->base.u.bits.unsigned_flag);
5321 WB (t->base.u.bits.packed_flag);
5322 WB (t->base.u.bits.user_align);
5323 WB (t->base.u.bits.nameless_flag);
5324 WB (t->base.u.bits.atomic_flag);
5325 break;
5326 }
5327
5328 if (CODE_CONTAINS_STRUCT (code, TS_TYPE_COMMON))
5329 {
5330 WB (t->type_common.no_force_blk_flag);
5331 WB (t->type_common.needs_constructing_flag);
5332 WB (t->type_common.transparent_aggr_flag);
5333 WB (t->type_common.restrict_flag);
5334 WB (t->type_common.string_flag);
5335 WB (t->type_common.lang_flag_0);
5336 WB (t->type_common.lang_flag_1);
5337 WB (t->type_common.lang_flag_2);
5338 WB (t->type_common.lang_flag_3);
5339 WB (t->type_common.lang_flag_4);
5340 WB (t->type_common.lang_flag_5);
5341 WB (t->type_common.lang_flag_6);
5342 WB (t->type_common.typeless_storage);
5343 }
5344
5345 if (CODE_CONTAINS_STRUCT (code, TS_DECL_COMMON))
5346 {
5347 WB (t->decl_common.nonlocal_flag);
5348 WB (t->decl_common.virtual_flag);
5349 WB (t->decl_common.ignored_flag);
5350 WB (t->decl_common.abstract_flag);
5351 WB (t->decl_common.artificial_flag);
5352 WB (t->decl_common.preserve_flag);
5353 WB (t->decl_common.debug_expr_is_from);
5354 WB (t->decl_common.lang_flag_0);
5355 WB (t->decl_common.lang_flag_1);
5356 WB (t->decl_common.lang_flag_2);
5357 WB (t->decl_common.lang_flag_3);
5358 WB (t->decl_common.lang_flag_4);
5359 WB (t->decl_common.lang_flag_5);
5360 WB (t->decl_common.lang_flag_6);
5361 WB (t->decl_common.lang_flag_7);
5362 WB (t->decl_common.lang_flag_8);
5363 WB (t->decl_common.decl_flag_0);
5364
5365 {
5366 /* DECL_EXTERNAL -> decl_flag_1
5367 == it is defined elsewhere
5368 DECL_NOT_REALLY_EXTERN -> base.not_really_extern
5369 == that was a lie, it is here */
5370
5371 bool is_external = t->decl_common.decl_flag_1;
5372 if (!is_external)
5373 /* decl_flag_1 is DECL_EXTERNAL. Things we emit here, might
5374 well be external from the POV of an importer. */
5375 // FIXME: Do we need to know if this is a TEMPLATE_RESULT --
5376 // a flag from the caller?
5377 switch (code)
5378 {
5379 default:
5380 break;
5381
5382 case VAR_DECL:
5383 if (TREE_PUBLIC (t)
5384 && !DECL_VAR_DECLARED_INLINE_P (t))
5385 is_external = true;
5386 break;
5387
5388 case FUNCTION_DECL:
5389 if (TREE_PUBLIC (t)
5390 && !DECL_DECLARED_INLINE_P (t))
5391 is_external = true;
5392 break;
5393 }
5394 WB (is_external);
5395 }
5396
5397 WB (t->decl_common.decl_flag_2);
5398 WB (t->decl_common.decl_flag_3);
5399 WB (t->decl_common.not_gimple_reg_flag);
5400 WB (t->decl_common.decl_by_reference_flag);
5401 WB (t->decl_common.decl_read_flag);
5402 WB (t->decl_common.decl_nonshareable_flag);
5403 }
5404
5405 if (CODE_CONTAINS_STRUCT (code, TS_DECL_WITH_VIS))
5406 {
5407 WB (t->decl_with_vis.defer_output);
5408 WB (t->decl_with_vis.hard_register);
5409 WB (t->decl_with_vis.common_flag);
5410 WB (t->decl_with_vis.in_text_section);
5411 WB (t->decl_with_vis.in_constant_pool);
5412 WB (t->decl_with_vis.dllimport_flag);
5413 WB (t->decl_with_vis.weak_flag);
5414 WB (t->decl_with_vis.seen_in_bind_expr);
5415 WB (t->decl_with_vis.comdat_flag);
5416 WB (t->decl_with_vis.visibility_specified);
5417 WB (t->decl_with_vis.init_priority_p);
5418 WB (t->decl_with_vis.shadowed_for_var_p);
5419 WB (t->decl_with_vis.cxx_constructor);
5420 WB (t->decl_with_vis.cxx_destructor);
5421 WB (t->decl_with_vis.final);
5422 WB (t->decl_with_vis.regdecl_flag);
5423 }
5424
5425 if (CODE_CONTAINS_STRUCT (code, TS_FUNCTION_DECL))
5426 {
5427 WB (t->function_decl.static_ctor_flag);
5428 WB (t->function_decl.static_dtor_flag);
5429 WB (t->function_decl.uninlinable);
5430 WB (t->function_decl.possibly_inlined);
5431 WB (t->function_decl.novops_flag);
5432 WB (t->function_decl.returns_twice_flag);
5433 WB (t->function_decl.malloc_flag);
5434 WB (t->function_decl.declared_inline_flag);
5435 WB (t->function_decl.no_inline_warning_flag);
5436 WB (t->function_decl.no_instrument_function_entry_exit);
5437 WB (t->function_decl.no_limit_stack);
5438 WB (t->function_decl.disregard_inline_limits);
5439 WB (t->function_decl.pure_flag);
5440 WB (t->function_decl.looping_const_or_pure_flag);
5441
5442 WB (t->function_decl.has_debug_args_flag);
5443 WB (t->function_decl.versioned_function);
5444
5445 /* decl_type is a (misnamed) 2 bit discriminator. */
5446 unsigned kind = t->function_decl.decl_type;
5447 WB ((kind >> 0) & 1);
5448 WB ((kind >> 1) & 1);
5449 }
5450 #undef WB
5451 }
5452
5453 bool
5454 trees_in::core_bools (tree t)
5455 {
5456 #define RB(X) ((X) = b ())
5457 tree_code code = TREE_CODE (t);
5458
5459 RB (t->base.side_effects_flag);
5460 RB (t->base.constant_flag);
5461 RB (t->base.addressable_flag);
5462 RB (t->base.volatile_flag);
5463 RB (t->base.readonly_flag);
5464 /* base.asm_written_flag is not streamed. */
5465 RB (t->base.nowarning_flag);
5466 /* base.visited is not streamed. */
5467 /* base.used_flag is not streamed. */
5468 RB (t->base.nothrow_flag);
5469 RB (t->base.static_flag);
5470 if (TREE_CODE_CLASS (code) != tcc_type)
5471 RB (t->base.public_flag);
5472 RB (t->base.private_flag);
5473 RB (t->base.protected_flag);
5474 RB (t->base.deprecated_flag);
5475 RB (t->base.default_def_flag);
5476
5477 switch (code)
5478 {
5479 case CALL_EXPR:
5480 case INTEGER_CST:
5481 case SSA_NAME:
5482 case TARGET_MEM_REF:
5483 case TREE_VEC:
5484 /* These use different base.u fields. */
5485 break;
5486
5487 default:
5488 RB (t->base.u.bits.lang_flag_0);
5489 RB (t->base.u.bits.lang_flag_1);
5490 RB (t->base.u.bits.lang_flag_2);
5491 RB (t->base.u.bits.lang_flag_3);
5492 RB (t->base.u.bits.lang_flag_4);
5493 RB (t->base.u.bits.lang_flag_5);
5494 RB (t->base.u.bits.lang_flag_6);
5495 RB (t->base.u.bits.saturating_flag);
5496 RB (t->base.u.bits.unsigned_flag);
5497 RB (t->base.u.bits.packed_flag);
5498 RB (t->base.u.bits.user_align);
5499 RB (t->base.u.bits.nameless_flag);
5500 RB (t->base.u.bits.atomic_flag);
5501 break;
5502 }
5503
5504 if (CODE_CONTAINS_STRUCT (code, TS_TYPE_COMMON))
5505 {
5506 RB (t->type_common.no_force_blk_flag);
5507 RB (t->type_common.needs_constructing_flag);
5508 RB (t->type_common.transparent_aggr_flag);
5509 RB (t->type_common.restrict_flag);
5510 RB (t->type_common.string_flag);
5511 RB (t->type_common.lang_flag_0);
5512 RB (t->type_common.lang_flag_1);
5513 RB (t->type_common.lang_flag_2);
5514 RB (t->type_common.lang_flag_3);
5515 RB (t->type_common.lang_flag_4);
5516 RB (t->type_common.lang_flag_5);
5517 RB (t->type_common.lang_flag_6);
5518 RB (t->type_common.typeless_storage);
5519 }
5520
5521 if (CODE_CONTAINS_STRUCT (code, TS_DECL_COMMON))
5522 {
5523 RB (t->decl_common.nonlocal_flag);
5524 RB (t->decl_common.virtual_flag);
5525 RB (t->decl_common.ignored_flag);
5526 RB (t->decl_common.abstract_flag);
5527 RB (t->decl_common.artificial_flag);
5528 RB (t->decl_common.preserve_flag);
5529 RB (t->decl_common.debug_expr_is_from);
5530 RB (t->decl_common.lang_flag_0);
5531 RB (t->decl_common.lang_flag_1);
5532 RB (t->decl_common.lang_flag_2);
5533 RB (t->decl_common.lang_flag_3);
5534 RB (t->decl_common.lang_flag_4);
5535 RB (t->decl_common.lang_flag_5);
5536 RB (t->decl_common.lang_flag_6);
5537 RB (t->decl_common.lang_flag_7);
5538 RB (t->decl_common.lang_flag_8);
5539 RB (t->decl_common.decl_flag_0);
5540 RB (t->decl_common.decl_flag_1);
5541 RB (t->decl_common.decl_flag_2);
5542 RB (t->decl_common.decl_flag_3);
5543 RB (t->decl_common.not_gimple_reg_flag);
5544 RB (t->decl_common.decl_by_reference_flag);
5545 RB (t->decl_common.decl_read_flag);
5546 RB (t->decl_common.decl_nonshareable_flag);
5547 }
5548
5549 if (CODE_CONTAINS_STRUCT (code, TS_DECL_WITH_VIS))
5550 {
5551 RB (t->decl_with_vis.defer_output);
5552 RB (t->decl_with_vis.hard_register);
5553 RB (t->decl_with_vis.common_flag);
5554 RB (t->decl_with_vis.in_text_section);
5555 RB (t->decl_with_vis.in_constant_pool);
5556 RB (t->decl_with_vis.dllimport_flag);
5557 RB (t->decl_with_vis.weak_flag);
5558 RB (t->decl_with_vis.seen_in_bind_expr);
5559 RB (t->decl_with_vis.comdat_flag);
5560 RB (t->decl_with_vis.visibility_specified);
5561 RB (t->decl_with_vis.init_priority_p);
5562 RB (t->decl_with_vis.shadowed_for_var_p);
5563 RB (t->decl_with_vis.cxx_constructor);
5564 RB (t->decl_with_vis.cxx_destructor);
5565 RB (t->decl_with_vis.final);
5566 RB (t->decl_with_vis.regdecl_flag);
5567 }
5568
5569 if (CODE_CONTAINS_STRUCT (code, TS_FUNCTION_DECL))
5570 {
5571 RB (t->function_decl.static_ctor_flag);
5572 RB (t->function_decl.static_dtor_flag);
5573 RB (t->function_decl.uninlinable);
5574 RB (t->function_decl.possibly_inlined);
5575 RB (t->function_decl.novops_flag);
5576 RB (t->function_decl.returns_twice_flag);
5577 RB (t->function_decl.malloc_flag);
5578 RB (t->function_decl.declared_inline_flag);
5579 RB (t->function_decl.no_inline_warning_flag);
5580 RB (t->function_decl.no_instrument_function_entry_exit);
5581 RB (t->function_decl.no_limit_stack);
5582 RB (t->function_decl.disregard_inline_limits);
5583 RB (t->function_decl.pure_flag);
5584 RB (t->function_decl.looping_const_or_pure_flag);
5585
5586 RB (t->function_decl.has_debug_args_flag);
5587 RB (t->function_decl.versioned_function);
5588
5589 /* decl_type is a (misnamed) 2 bit discriminator. */
5590 unsigned kind = 0;
5591 kind |= unsigned (b ()) << 0;
5592 kind |= unsigned (b ()) << 1;
5593 t->function_decl.decl_type = function_decl_type (kind);
5594 }
5595 #undef RB
5596 return !get_overrun ();
5597 }
5598
5599 void
5600 trees_out::lang_decl_bools (tree t)
5601 {
5602 #define WB(X) (b (X))
5603 const struct lang_decl *lang = DECL_LANG_SPECIFIC (t);
5604
5605 WB (lang->u.base.language == lang_cplusplus);
5606 WB ((lang->u.base.use_template >> 0) & 1);
5607 WB ((lang->u.base.use_template >> 1) & 1);
5608 /* Do not write lang->u.base.not_really_extern, importer will set
5609 when reading the definition (if any). */
5610 WB (lang->u.base.initialized_in_class);
5611 WB (lang->u.base.threadprivate_or_deleted_p);
5612 /* Do not write lang->u.base.anticipated_p, it is a property of the
5613 current TU. */
5614 WB (lang->u.base.friend_or_tls);
5615 WB (lang->u.base.unknown_bound_p);
5616 /* Do not write lang->u.base.odr_used, importer will recalculate if
5617 they do ODR use this decl. */
5618 WB (lang->u.base.concept_p);
5619 WB (lang->u.base.var_declared_inline_p);
5620 WB (lang->u.base.dependent_init_p);
5621 WB (lang->u.base.module_purview_p);
5622 if (VAR_OR_FUNCTION_DECL_P (t))
5623 WB (lang->u.base.module_pending_p);
5624 switch (lang->u.base.selector)
5625 {
5626 default:
5627 gcc_unreachable ();
5628
5629 case lds_fn: /* lang_decl_fn. */
5630 WB (lang->u.fn.global_ctor_p);
5631 WB (lang->u.fn.global_dtor_p);
5632 WB (lang->u.fn.static_function);
5633 WB (lang->u.fn.pure_virtual);
5634 WB (lang->u.fn.defaulted_p);
5635 WB (lang->u.fn.has_in_charge_parm_p);
5636 WB (lang->u.fn.has_vtt_parm_p);
5637 /* There shouldn't be a pending inline at this point. */
5638 gcc_assert (!lang->u.fn.pending_inline_p);
5639 WB (lang->u.fn.nonconverting);
5640 WB (lang->u.fn.thunk_p);
5641 WB (lang->u.fn.this_thunk_p);
5642 /* Do not stream lang->u.hidden_friend_p, it is a property of
5643 the TU. */
5644 WB (lang->u.fn.omp_declare_reduction_p);
5645 WB (lang->u.fn.has_dependent_explicit_spec_p);
5646 WB (lang->u.fn.immediate_fn_p);
5647 WB (lang->u.fn.maybe_deleted);
5648 goto lds_min;
5649
5650 case lds_decomp: /* lang_decl_decomp. */
5651 /* No bools. */
5652 goto lds_min;
5653
5654 case lds_min: /* lang_decl_min. */
5655 lds_min:
5656 /* No bools. */
5657 break;
5658
5659 case lds_ns: /* lang_decl_ns. */
5660 /* No bools. */
5661 break;
5662
5663 case lds_parm: /* lang_decl_parm. */
5664 /* No bools. */
5665 break;
5666 }
5667 #undef WB
5668 }
5669
5670 bool
5671 trees_in::lang_decl_bools (tree t)
5672 {
5673 #define RB(X) ((X) = b ())
5674 struct lang_decl *lang = DECL_LANG_SPECIFIC (t);
5675
5676 lang->u.base.language = b () ? lang_cplusplus : lang_c;
5677 unsigned v;
5678 v = b () << 0;
5679 v |= b () << 1;
5680 lang->u.base.use_template = v;
5681 /* lang->u.base.not_really_extern is not streamed. */
5682 RB (lang->u.base.initialized_in_class);
5683 RB (lang->u.base.threadprivate_or_deleted_p);
5684 /* lang->u.base.anticipated_p is not streamed. */
5685 RB (lang->u.base.friend_or_tls);
5686 RB (lang->u.base.unknown_bound_p);
5687 /* lang->u.base.odr_used is not streamed. */
5688 RB (lang->u.base.concept_p);
5689 RB (lang->u.base.var_declared_inline_p);
5690 RB (lang->u.base.dependent_init_p);
5691 RB (lang->u.base.module_purview_p);
5692 if (VAR_OR_FUNCTION_DECL_P (t))
5693 RB (lang->u.base.module_pending_p);
5694 switch (lang->u.base.selector)
5695 {
5696 default:
5697 gcc_unreachable ();
5698
5699 case lds_fn: /* lang_decl_fn. */
5700 RB (lang->u.fn.global_ctor_p);
5701 RB (lang->u.fn.global_dtor_p);
5702 RB (lang->u.fn.static_function);
5703 RB (lang->u.fn.pure_virtual);
5704 RB (lang->u.fn.defaulted_p);
5705 RB (lang->u.fn.has_in_charge_parm_p);
5706 RB (lang->u.fn.has_vtt_parm_p);
5707 RB (lang->u.fn.nonconverting);
5708 RB (lang->u.fn.thunk_p);
5709 RB (lang->u.fn.this_thunk_p);
5710 /* lang->u.fn.hidden_friend_p is not streamed. */
5711 RB (lang->u.fn.omp_declare_reduction_p);
5712 RB (lang->u.fn.has_dependent_explicit_spec_p);
5713 RB (lang->u.fn.immediate_fn_p);
5714 RB (lang->u.fn.maybe_deleted);
5715 goto lds_min;
5716
5717 case lds_decomp: /* lang_decl_decomp. */
5718 /* No bools. */
5719 goto lds_min;
5720
5721 case lds_min: /* lang_decl_min. */
5722 lds_min:
5723 /* No bools. */
5724 break;
5725
5726 case lds_ns: /* lang_decl_ns. */
5727 /* No bools. */
5728 break;
5729
5730 case lds_parm: /* lang_decl_parm. */
5731 /* No bools. */
5732 break;
5733 }
5734 #undef RB
5735 return !get_overrun ();
5736 }
5737
5738 void
5739 trees_out::lang_type_bools (tree t)
5740 {
5741 #define WB(X) (b (X))
5742 const struct lang_type *lang = TYPE_LANG_SPECIFIC (t);
5743
5744 WB (lang->has_type_conversion);
5745 WB (lang->has_copy_ctor);
5746 WB (lang->has_default_ctor);
5747 WB (lang->const_needs_init);
5748 WB (lang->ref_needs_init);
5749 WB (lang->has_const_copy_assign);
5750 WB ((lang->use_template >> 0) & 1);
5751 WB ((lang->use_template >> 1) & 1);
5752
5753 WB (lang->has_mutable);
5754 WB (lang->com_interface);
5755 WB (lang->non_pod_class);
5756 WB (lang->nearly_empty_p);
5757 WB (lang->user_align);
5758 WB (lang->has_copy_assign);
5759 WB (lang->has_new);
5760 WB (lang->has_array_new);
5761
5762 WB ((lang->gets_delete >> 0) & 1);
5763 WB ((lang->gets_delete >> 1) & 1);
5764 // Interfaceness is recalculated upon reading. May have to revisit?
5765 // How do dllexport and dllimport interact across a module?
5766 // lang->interface_only
5767 // lang->interface_unknown
5768 WB (lang->contains_empty_class_p);
5769 WB (lang->anon_aggr);
5770 WB (lang->non_zero_init);
5771 WB (lang->empty_p);
5772
5773 WB (lang->vec_new_uses_cookie);
5774 WB (lang->declared_class);
5775 WB (lang->diamond_shaped);
5776 WB (lang->repeated_base);
5777 gcc_assert (!lang->being_defined);
5778 // lang->debug_requested
5779 WB (lang->fields_readonly);
5780 WB (lang->ptrmemfunc_flag);
5781
5782 WB (lang->lazy_default_ctor);
5783 WB (lang->lazy_copy_ctor);
5784 WB (lang->lazy_copy_assign);
5785 WB (lang->lazy_destructor);
5786 WB (lang->has_const_copy_ctor);
5787 WB (lang->has_complex_copy_ctor);
5788 WB (lang->has_complex_copy_assign);
5789 WB (lang->non_aggregate);
5790
5791 WB (lang->has_complex_dflt);
5792 WB (lang->has_list_ctor);
5793 WB (lang->non_std_layout);
5794 WB (lang->is_literal);
5795 WB (lang->lazy_move_ctor);
5796 WB (lang->lazy_move_assign);
5797 WB (lang->has_complex_move_ctor);
5798 WB (lang->has_complex_move_assign);
5799
5800 WB (lang->has_constexpr_ctor);
5801 WB (lang->unique_obj_representations);
5802 WB (lang->unique_obj_representations_set);
5803 #undef WB
5804 }
5805
5806 bool
5807 trees_in::lang_type_bools (tree t)
5808 {
5809 #define RB(X) ((X) = b ())
5810 struct lang_type *lang = TYPE_LANG_SPECIFIC (t);
5811
5812 RB (lang->has_type_conversion);
5813 RB (lang->has_copy_ctor);
5814 RB (lang->has_default_ctor);
5815 RB (lang->const_needs_init);
5816 RB (lang->ref_needs_init);
5817 RB (lang->has_const_copy_assign);
5818 unsigned v;
5819 v = b () << 0;
5820 v |= b () << 1;
5821 lang->use_template = v;
5822
5823 RB (lang->has_mutable);
5824 RB (lang->com_interface);
5825 RB (lang->non_pod_class);
5826 RB (lang->nearly_empty_p);
5827 RB (lang->user_align);
5828 RB (lang->has_copy_assign);
5829 RB (lang->has_new);
5830 RB (lang->has_array_new);
5831
5832 v = b () << 0;
5833 v |= b () << 1;
5834 lang->gets_delete = v;
5835 // lang->interface_only
5836 // lang->interface_unknown
5837 lang->interface_unknown = true; // Redetermine interface
5838 RB (lang->contains_empty_class_p);
5839 RB (lang->anon_aggr);
5840 RB (lang->non_zero_init);
5841 RB (lang->empty_p);
5842
5843 RB (lang->vec_new_uses_cookie);
5844 RB (lang->declared_class);
5845 RB (lang->diamond_shaped);
5846 RB (lang->repeated_base);
5847 gcc_assert (!lang->being_defined);
5848 gcc_assert (!lang->debug_requested);
5849 RB (lang->fields_readonly);
5850 RB (lang->ptrmemfunc_flag);
5851
5852 RB (lang->lazy_default_ctor);
5853 RB (lang->lazy_copy_ctor);
5854 RB (lang->lazy_copy_assign);
5855 RB (lang->lazy_destructor);
5856 RB (lang->has_const_copy_ctor);
5857 RB (lang->has_complex_copy_ctor);
5858 RB (lang->has_complex_copy_assign);
5859 RB (lang->non_aggregate);
5860
5861 RB (lang->has_complex_dflt);
5862 RB (lang->has_list_ctor);
5863 RB (lang->non_std_layout);
5864 RB (lang->is_literal);
5865 RB (lang->lazy_move_ctor);
5866 RB (lang->lazy_move_assign);
5867 RB (lang->has_complex_move_ctor);
5868 RB (lang->has_complex_move_assign);
5869
5870 RB (lang->has_constexpr_ctor);
5871 RB (lang->unique_obj_representations);
5872 RB (lang->unique_obj_representations_set);
5873 #undef RB
5874 return !get_overrun ();
5875 }
5876
5877 /* Read & write the core values and pointers. */
5878
5879 void
5880 trees_out::core_vals (tree t)
5881 {
5882 #define WU(X) (u (X))
5883 #define WT(X) (tree_node (X))
5884 tree_code code = TREE_CODE (t);
5885
5886 /* First by shape of the tree. */
5887
5888 if (CODE_CONTAINS_STRUCT (code, TS_DECL_MINIMAL))
5889 {
5890 /* Write this early, for better log information. */
5891 WT (t->decl_minimal.name);
5892 if (!DECL_TEMPLATE_PARM_P (t))
5893 WT (t->decl_minimal.context);
5894
5895 state->write_location (*this, t->decl_minimal.locus);
5896 }
5897
5898 if (CODE_CONTAINS_STRUCT (code, TS_TYPE_COMMON))
5899 {
5900 /* The only types we write also have TYPE_NON_COMMON. */
5901 gcc_checking_assert (CODE_CONTAINS_STRUCT (code, TS_TYPE_NON_COMMON));
5902
5903 /* We only stream the main variant. */
5904 gcc_checking_assert (TYPE_MAIN_VARIANT (t) == t);
5905
5906 /* Stream the name & context first, for better log information */
5907 WT (t->type_common.name);
5908 WT (t->type_common.context);
5909
5910 /* By construction we want to make sure we have the canonical
5911 and main variants already in the type table, so emit them
5912 now. */
5913 WT (t->type_common.main_variant);
5914
5915 tree canonical = t->type_common.canonical;
5916 if (canonical && DECL_TEMPLATE_PARM_P (TYPE_NAME (t)))
5917 /* We do not want to wander into different templates.
5918 Reconstructed on stream in. */
5919 canonical = t;
5920 WT (canonical);
5921
5922 /* type_common.next_variant is internally manipulated. */
5923 /* type_common.pointer_to, type_common.reference_to. */
5924
5925 if (streaming_p ())
5926 {
5927 WU (t->type_common.precision);
5928 WU (t->type_common.contains_placeholder_bits);
5929 WU (t->type_common.mode);
5930 WU (t->type_common.align);
5931 }
5932
5933 if (!RECORD_OR_UNION_CODE_P (code))
5934 {
5935 WT (t->type_common.size);
5936 WT (t->type_common.size_unit);
5937 }
5938 WT (t->type_common.attributes);
5939
5940 WT (t->type_common.common.chain); /* TYPE_STUB_DECL. */
5941 }
5942
5943 if (CODE_CONTAINS_STRUCT (code, TS_DECL_COMMON))
5944 {
5945 if (streaming_p ())
5946 {
5947 WU (t->decl_common.mode);
5948 WU (t->decl_common.off_align);
5949 WU (t->decl_common.align);
5950 }
5951
5952 /* For templates these hold instantiation (partial and/or
5953 specialization) information. */
5954 if (code != TEMPLATE_DECL)
5955 {
5956 WT (t->decl_common.size);
5957 WT (t->decl_common.size_unit);
5958 }
5959
5960 WT (t->decl_common.attributes);
5961 // FIXME: Does this introduce cross-decl links? For instance
5962 // from instantiation to the template. If so, we'll need more
5963 // deduplication logic. I think we'll need to walk the blocks
5964 // of the owning function_decl's abstract origin in tandem, to
5965 // generate the locating data needed?
5966 WT (t->decl_common.abstract_origin);
5967 }
5968
5969 if (CODE_CONTAINS_STRUCT (code, TS_DECL_WITH_VIS))
5970 {
5971 WT (t->decl_with_vis.assembler_name);
5972 if (streaming_p ())
5973 WU (t->decl_with_vis.visibility);
5974 }
5975
5976 if (CODE_CONTAINS_STRUCT (code, TS_TYPE_NON_COMMON))
5977 {
5978 /* Records and unions hold FIELDS, VFIELD & BINFO on these
5979 things. */
5980 if (!RECORD_OR_UNION_CODE_P (code) && code != ENUMERAL_TYPE)
5981 {
5982 // FIXME: These are from tpl_parm_value's 'type' writing.
5983 // Perhaps it should just be doing them directly?
5984 gcc_checking_assert (code == TEMPLATE_TYPE_PARM
5985 || code == TEMPLATE_TEMPLATE_PARM
5986 || code == BOUND_TEMPLATE_TEMPLATE_PARM);
5987 gcc_checking_assert (!TYPE_CACHED_VALUES_P (t));
5988 WT (t->type_non_common.values);
5989 WT (t->type_non_common.maxval);
5990 WT (t->type_non_common.minval);
5991 }
5992
5993 WT (t->type_non_common.lang_1);
5994 }
5995
5996 if (CODE_CONTAINS_STRUCT (code, TS_EXP))
5997 {
5998 state->write_location (*this, t->exp.locus);
5999
6000 /* Walk in forward order, as (for instance) REQUIRES_EXPR has a
6001 bunch of unscoped parms on its first operand. It's safer to
6002 create those in order. */
6003 bool vl = TREE_CODE_CLASS (code) == tcc_vl_exp;
6004 for (unsigned limit = (vl ? VL_EXP_OPERAND_LENGTH (t)
6005 : TREE_OPERAND_LENGTH (t)),
6006 ix = unsigned (vl); ix != limit; ix++)
6007 WT (TREE_OPERAND (t, ix));
6008 }
6009 else
6010 /* The CODE_CONTAINS tables were inaccurate when I started. */
6011 gcc_checking_assert (TREE_CODE_CLASS (code) != tcc_expression
6012 && TREE_CODE_CLASS (code) != tcc_binary
6013 && TREE_CODE_CLASS (code) != tcc_unary
6014 && TREE_CODE_CLASS (code) != tcc_reference
6015 && TREE_CODE_CLASS (code) != tcc_comparison
6016 && TREE_CODE_CLASS (code) != tcc_statement
6017 && TREE_CODE_CLASS (code) != tcc_vl_exp);
6018
6019 /* Then by CODE. Special cases and/or 1:1 tree shape
6020 correspondance. */
6021 switch (code)
6022 {
6023 default:
6024 break;
6025
6026 case ARGUMENT_PACK_SELECT: /* Transient during instantiation. */
6027 case DEFERRED_PARSE: /* Expanded upon completion of
6028 outermost class. */
6029 case IDENTIFIER_NODE: /* Streamed specially. */
6030 case BINDING_VECTOR: /* Only in namespace-scope symbol
6031 table. */
6032 case SSA_NAME:
6033 case TRANSLATION_UNIT_DECL: /* There is only one, it is a
6034 global_tree. */
6035 case USERDEF_LITERAL: /* Expanded during parsing. */
6036 gcc_unreachable (); /* Should never meet. */
6037
6038 /* Constants. */
6039 case COMPLEX_CST:
6040 WT (TREE_REALPART (t));
6041 WT (TREE_IMAGPART (t));
6042 break;
6043
6044 case FIXED_CST:
6045 gcc_unreachable (); /* Not supported in C++. */
6046
6047 case INTEGER_CST:
6048 if (streaming_p ())
6049 {
6050 unsigned num = TREE_INT_CST_EXT_NUNITS (t);
6051 for (unsigned ix = 0; ix != num; ix++)
6052 wu (TREE_INT_CST_ELT (t, ix));
6053 }
6054 break;
6055
6056 case POLY_INT_CST:
6057 gcc_unreachable (); /* Not supported in C++. */
6058
6059 case REAL_CST:
6060 if (streaming_p ())
6061 buf (TREE_REAL_CST_PTR (t), sizeof (real_value));
6062 break;
6063
6064 case STRING_CST:
6065 /* Streamed during start. */
6066 break;
6067
6068 case VECTOR_CST:
6069 for (unsigned ix = vector_cst_encoded_nelts (t); ix--;)
6070 WT (VECTOR_CST_ENCODED_ELT (t, ix));
6071 break;
6072
6073 /* Decls. */
6074 case VAR_DECL:
6075 if (DECL_CONTEXT (t)
6076 && TREE_CODE (DECL_CONTEXT (t)) != FUNCTION_DECL)
6077 break;
6078 /* FALLTHROUGH */
6079
6080 case RESULT_DECL:
6081 case PARM_DECL:
6082 if (DECL_HAS_VALUE_EXPR_P (t))
6083 WT (DECL_VALUE_EXPR (t));
6084 /* FALLTHROUGH */
6085
6086 case CONST_DECL:
6087 case IMPORTED_DECL:
6088 WT (t->decl_common.initial);
6089 break;
6090
6091 case FIELD_DECL:
6092 WT (t->field_decl.offset);
6093 WT (t->field_decl.bit_field_type);
6094 WT (t->field_decl.qualifier); /* bitfield unit. */
6095 WT (t->field_decl.bit_offset);
6096 WT (t->field_decl.fcontext);
6097 WT (t->decl_common.initial);
6098 break;
6099
6100 case LABEL_DECL:
6101 if (streaming_p ())
6102 {
6103 WU (t->label_decl.label_decl_uid);
6104 WU (t->label_decl.eh_landing_pad_nr);
6105 }
6106 break;
6107
6108 case FUNCTION_DECL:
6109 if (streaming_p ())
6110 {
6111 /* Builtins can be streamed by value when a header declares
6112 them. */
6113 WU (DECL_BUILT_IN_CLASS (t));
6114 if (DECL_BUILT_IN_CLASS (t) != NOT_BUILT_IN)
6115 WU (DECL_UNCHECKED_FUNCTION_CODE (t));
6116 }
6117
6118 WT (t->function_decl.personality);
6119 WT (t->function_decl.function_specific_target);
6120 WT (t->function_decl.function_specific_optimization);
6121 WT (t->function_decl.vindex);
6122 break;
6123
6124 case USING_DECL:
6125 /* USING_DECL_DECLS */
6126 WT (t->decl_common.initial);
6127 /* FALLTHROUGH */
6128
6129 case TYPE_DECL:
6130 /* USING_DECL: USING_DECL_SCOPE */
6131 /* TYPE_DECL: DECL_ORIGINAL_TYPE */
6132 WT (t->decl_non_common.result);
6133 break;
6134
6135 /* Miscellaneous common nodes. */
6136 case BLOCK:
6137 state->write_location (*this, t->block.locus);
6138 state->write_location (*this, t->block.end_locus);
6139
6140 /* DECL_LOCAL_DECL_P decls are first encountered here and
6141 streamed by value. */
6142 chained_decls (t->block.vars);
6143 /* nonlocalized_vars is a middle-end thing. */
6144 WT (t->block.subblocks);
6145 WT (t->block.supercontext);
6146 // FIXME: As for decl's abstract_origin, does this introduce crosslinks?
6147 WT (t->block.abstract_origin);
6148 /* fragment_origin, fragment_chain are middle-end things. */
6149 WT (t->block.chain);
6150 /* nonlocalized_vars, block_num & die are middle endy/debug
6151 things. */
6152 break;
6153
6154 case CALL_EXPR:
6155 if (streaming_p ())
6156 WU (t->base.u.ifn);
6157 break;
6158
6159 case CONSTRUCTOR:
6160 {
6161 unsigned len = vec_safe_length (t->constructor.elts);
6162 if (streaming_p ())
6163 WU (len);
6164 if (len)
6165 for (unsigned ix = 0; ix != len; ix++)
6166 {
6167 const constructor_elt &elt = (*t->constructor.elts)[ix];
6168
6169 WT (elt.index);
6170 WT (elt.value);
6171 }
6172 }
6173 break;
6174
6175 case OMP_CLAUSE:
6176 {
6177 /* The ompcode is serialized in start. */
6178 if (streaming_p ())
6179 WU (t->omp_clause.subcode.map_kind);
6180 state->write_location (*this, t->omp_clause.locus);
6181
6182 unsigned len = omp_clause_num_ops[OMP_CLAUSE_CODE (t)];
6183 for (unsigned ix = 0; ix != len; ix++)
6184 WT (t->omp_clause.ops[ix]);
6185 }
6186 break;
6187
6188 case STATEMENT_LIST:
6189 for (tree_stmt_iterator iter = tsi_start (t);
6190 !tsi_end_p (iter); tsi_next (&iter))
6191 if (tree stmt = tsi_stmt (iter))
6192 WT (stmt);
6193 WT (NULL_TREE);
6194 break;
6195
6196 case OPTIMIZATION_NODE:
6197 case TARGET_OPTION_NODE:
6198 // FIXME: Our representation for these two nodes is a cache of
6199 // the resulting set of options. Not a record of the options
6200 // that got changed by a particular attribute or pragma. Should
6201 // we record that, or should we record the diff from the command
6202 // line options? The latter seems the right behaviour, but is
6203 // (a) harder, and I guess could introduce strangeness if the
6204 // importer has set some incompatible set of optimization flags?
6205 gcc_unreachable ();
6206 break;
6207
6208 case TREE_BINFO:
6209 {
6210 WT (t->binfo.common.chain);
6211 WT (t->binfo.offset);
6212 WT (t->binfo.inheritance);
6213 WT (t->binfo.vptr_field);
6214
6215 WT (t->binfo.vtable);
6216 WT (t->binfo.virtuals);
6217 WT (t->binfo.vtt_subvtt);
6218 WT (t->binfo.vtt_vptr);
6219
6220 tree_vec (BINFO_BASE_ACCESSES (t));
6221 unsigned num = vec_safe_length (BINFO_BASE_ACCESSES (t));
6222 for (unsigned ix = 0; ix != num; ix++)
6223 WT (BINFO_BASE_BINFO (t, ix));
6224 }
6225 break;
6226
6227 case TREE_LIST:
6228 WT (t->list.purpose);
6229 WT (t->list.value);
6230 WT (t->list.common.chain);
6231 break;
6232
6233 case TREE_VEC:
6234 for (unsigned ix = TREE_VEC_LENGTH (t); ix--;)
6235 WT (TREE_VEC_ELT (t, ix));
6236 /* We stash NON_DEFAULT_TEMPLATE_ARGS_COUNT on TREE_CHAIN! */
6237 gcc_checking_assert (!t->type_common.common.chain
6238 || (TREE_CODE (t->type_common.common.chain)
6239 == INTEGER_CST));
6240 WT (t->type_common.common.chain);
6241 break;
6242
6243 /* C++-specific nodes ... */
6244 case BASELINK:
6245 WT (((lang_tree_node *)t)->baselink.binfo);
6246 WT (((lang_tree_node *)t)->baselink.functions);
6247 WT (((lang_tree_node *)t)->baselink.access_binfo);
6248 break;
6249
6250 case CONSTRAINT_INFO:
6251 WT (((lang_tree_node *)t)->constraint_info.template_reqs);
6252 WT (((lang_tree_node *)t)->constraint_info.declarator_reqs);
6253 WT (((lang_tree_node *)t)->constraint_info.associated_constr);
6254 break;
6255
6256 case DEFERRED_NOEXCEPT:
6257 WT (((lang_tree_node *)t)->deferred_noexcept.pattern);
6258 WT (((lang_tree_node *)t)->deferred_noexcept.args);
6259 break;
6260
6261 case LAMBDA_EXPR:
6262 WT (((lang_tree_node *)t)->lambda_expression.capture_list);
6263 WT (((lang_tree_node *)t)->lambda_expression.this_capture);
6264 WT (((lang_tree_node *)t)->lambda_expression.extra_scope);
6265 /* pending_proxies is a parse-time thing. */
6266 gcc_assert (!((lang_tree_node *)t)->lambda_expression.pending_proxies);
6267 state->write_location
6268 (*this, ((lang_tree_node *)t)->lambda_expression.locus);
6269 if (streaming_p ())
6270 {
6271 WU (((lang_tree_node *)t)->lambda_expression.default_capture_mode);
6272 WU (((lang_tree_node *)t)->lambda_expression.discriminator);
6273 }
6274 break;
6275
6276 case OVERLOAD:
6277 WT (((lang_tree_node *)t)->overload.function);
6278 WT (t->common.chain);
6279 break;
6280
6281 case PTRMEM_CST:
6282 WT (((lang_tree_node *)t)->ptrmem.member);
6283 break;
6284
6285 case STATIC_ASSERT:
6286 WT (((lang_tree_node *)t)->static_assertion.condition);
6287 WT (((lang_tree_node *)t)->static_assertion.message);
6288 state->write_location
6289 (*this, ((lang_tree_node *)t)->static_assertion.location);
6290 break;
6291
6292 case TEMPLATE_DECL:
6293 /* Streamed with the template_decl node itself. */
6294 gcc_checking_assert
6295 (TREE_VISITED (((lang_tree_node *)t)->template_decl.arguments));
6296 gcc_checking_assert
6297 (TREE_VISITED (((lang_tree_node *)t)->template_decl.result)
6298 || dep_hash->find_dependency (t)->is_alias_tmpl_inst ());
6299 if (DECL_UNINSTANTIATED_TEMPLATE_FRIEND_P (t))
6300 WT (DECL_CHAIN (t));
6301 break;
6302
6303 case TEMPLATE_INFO:
6304 {
6305 WT (((lang_tree_node *)t)->template_info.tmpl);
6306 WT (((lang_tree_node *)t)->template_info.args);
6307
6308 const auto *ac = (((lang_tree_node *)t)
6309 ->template_info.deferred_access_checks);
6310 unsigned len = vec_safe_length (ac);
6311 if (streaming_p ())
6312 u (len);
6313 if (len)
6314 {
6315 for (unsigned ix = 0; ix != len; ix++)
6316 {
6317 const auto &m = (*ac)[ix];
6318 WT (m.binfo);
6319 WT (m.decl);
6320 WT (m.diag_decl);
6321 state->write_location (*this, m.loc);
6322 }
6323 }
6324 }
6325 break;
6326
6327 case TEMPLATE_PARM_INDEX:
6328 if (streaming_p ())
6329 {
6330 WU (((lang_tree_node *)t)->tpi.index);
6331 WU (((lang_tree_node *)t)->tpi.level);
6332 WU (((lang_tree_node *)t)->tpi.orig_level);
6333 }
6334 WT (((lang_tree_node *)t)->tpi.decl);
6335 /* TEMPLATE_PARM_DESCENDANTS (AKA TREE_CHAIN) is an internal
6336 cache, do not stream. */
6337 break;
6338
6339 case TRAIT_EXPR:
6340 WT (((lang_tree_node *)t)->trait_expression.type1);
6341 WT (((lang_tree_node *)t)->trait_expression.type2);
6342 if (streaming_p ())
6343 WU (((lang_tree_node *)t)->trait_expression.kind);
6344 break;
6345 }
6346
6347 if (CODE_CONTAINS_STRUCT (code, TS_TYPED))
6348 {
6349 /* We want to stream the type of a expression-like nodes /after/
6350 we've streamed the operands. The type often contains (bits
6351 of the) types of the operands, and with things like decltype
6352 and noexcept in play, we really want to stream the decls
6353 defining the type before we try and stream the type on its
6354 own. Otherwise we can find ourselves trying to read in a
6355 decl, when we're already partially reading in a component of
6356 its type. And that's bad. */
6357 tree type = t->typed.type;
6358 unsigned prec = 0;
6359
6360 switch (code)
6361 {
6362 default:
6363 break;
6364
6365 case TEMPLATE_DECL:
6366 /* We fill in the template's type separately. */
6367 type = NULL_TREE;
6368 break;
6369
6370 case TYPE_DECL:
6371 if (DECL_ORIGINAL_TYPE (t) && t == TYPE_NAME (type))
6372 /* This is a typedef. We set its type separately. */
6373 type = NULL_TREE;
6374 break;
6375
6376 case ENUMERAL_TYPE:
6377 if (type && !ENUM_FIXED_UNDERLYING_TYPE_P (t))
6378 {
6379 /* Type is a restricted range integer type derived from the
6380 integer_types. Find the right one. */
6381 prec = TYPE_PRECISION (type);
6382 tree name = DECL_NAME (TYPE_NAME (type));
6383
6384 for (unsigned itk = itk_none; itk--;)
6385 if (integer_types[itk]
6386 && DECL_NAME (TYPE_NAME (integer_types[itk])) == name)
6387 {
6388 type = integer_types[itk];
6389 break;
6390 }
6391 gcc_assert (type != t->typed.type);
6392 }
6393 break;
6394 }
6395
6396 WT (type);
6397 if (prec && streaming_p ())
6398 WU (prec);
6399 }
6400
6401 #undef WT
6402 #undef WU
6403 }
6404
6405 // Streaming in a reference to a decl can cause that decl to be
6406 // TREE_USED, which is the mark_used behaviour we need most of the
6407 // time. The trees_in::unused can be incremented to inhibit this,
6408 // which is at least needed for vtables.
6409
6410 bool
6411 trees_in::core_vals (tree t)
6412 {
6413 #define RU(X) ((X) = u ())
6414 #define RUC(T,X) ((X) = T (u ()))
6415 #define RT(X) ((X) = tree_node ())
6416 #define RTU(X) ((X) = tree_node (true))
6417 tree_code code = TREE_CODE (t);
6418
6419 /* First by tree shape. */
6420 if (CODE_CONTAINS_STRUCT (code, TS_DECL_MINIMAL))
6421 {
6422 RT (t->decl_minimal.name);
6423 if (!DECL_TEMPLATE_PARM_P (t))
6424 RT (t->decl_minimal.context);
6425
6426 /* Don't zap the locus just yet, we don't record it correctly
6427 and thus lose all location information. */
6428 t->decl_minimal.locus = state->read_location (*this);
6429 }
6430
6431 if (CODE_CONTAINS_STRUCT (code, TS_TYPE_COMMON))
6432 {
6433 RT (t->type_common.name);
6434 RT (t->type_common.context);
6435
6436 RT (t->type_common.main_variant);
6437 RT (t->type_common.canonical);
6438
6439 /* type_common.next_variant is internally manipulated. */
6440 /* type_common.pointer_to, type_common.reference_to. */
6441
6442 RU (t->type_common.precision);
6443 RU (t->type_common.contains_placeholder_bits);
6444 RUC (machine_mode, t->type_common.mode);
6445 RU (t->type_common.align);
6446
6447 if (!RECORD_OR_UNION_CODE_P (code))
6448 {
6449 RT (t->type_common.size);
6450 RT (t->type_common.size_unit);
6451 }
6452 RT (t->type_common.attributes);
6453
6454 RT (t->type_common.common.chain); /* TYPE_STUB_DECL. */
6455 }
6456
6457 if (CODE_CONTAINS_STRUCT (code, TS_DECL_COMMON))
6458 {
6459 RUC (machine_mode, t->decl_common.mode);
6460 RU (t->decl_common.off_align);
6461 RU (t->decl_common.align);
6462
6463 if (code != TEMPLATE_DECL)
6464 {
6465 RT (t->decl_common.size);
6466 RT (t->decl_common.size_unit);
6467 }
6468
6469 RT (t->decl_common.attributes);
6470 RT (t->decl_common.abstract_origin);
6471 }
6472
6473 if (CODE_CONTAINS_STRUCT (code, TS_DECL_WITH_VIS))
6474 {
6475 RT (t->decl_with_vis.assembler_name);
6476 RUC (symbol_visibility, t->decl_with_vis.visibility);
6477 }
6478
6479 if (CODE_CONTAINS_STRUCT (code, TS_TYPE_NON_COMMON))
6480 {
6481 /* Records and unions hold FIELDS, VFIELD & BINFO on these
6482 things. */
6483 if (!RECORD_OR_UNION_CODE_P (code) && code != ENUMERAL_TYPE)
6484 {
6485 /* This is not clobbering TYPE_CACHED_VALUES, because this
6486 is a type that doesn't have any. */
6487 gcc_checking_assert (!TYPE_CACHED_VALUES_P (t));
6488 RT (t->type_non_common.values);
6489 RT (t->type_non_common.maxval);
6490 RT (t->type_non_common.minval);
6491 }
6492
6493 RT (t->type_non_common.lang_1);
6494 }
6495
6496 if (CODE_CONTAINS_STRUCT (code, TS_EXP))
6497 {
6498 t->exp.locus = state->read_location (*this);
6499
6500 bool vl = TREE_CODE_CLASS (code) == tcc_vl_exp;
6501 for (unsigned limit = (vl ? VL_EXP_OPERAND_LENGTH (t)
6502 : TREE_OPERAND_LENGTH (t)),
6503 ix = unsigned (vl); ix != limit; ix++)
6504 RTU (TREE_OPERAND (t, ix));
6505 }
6506
6507 /* Then by CODE. Special cases and/or 1:1 tree shape
6508 correspondance. */
6509 switch (code)
6510 {
6511 default:
6512 break;
6513
6514 case ARGUMENT_PACK_SELECT:
6515 case DEFERRED_PARSE:
6516 case IDENTIFIER_NODE:
6517 case BINDING_VECTOR:
6518 case SSA_NAME:
6519 case TRANSLATION_UNIT_DECL:
6520 case USERDEF_LITERAL:
6521 return false; /* Should never meet. */
6522
6523 /* Constants. */
6524 case COMPLEX_CST:
6525 RT (TREE_REALPART (t));
6526 RT (TREE_IMAGPART (t));
6527 break;
6528
6529 case FIXED_CST:
6530 /* Not suported in C++. */
6531 return false;
6532
6533 case INTEGER_CST:
6534 {
6535 unsigned num = TREE_INT_CST_EXT_NUNITS (t);
6536 for (unsigned ix = 0; ix != num; ix++)
6537 TREE_INT_CST_ELT (t, ix) = wu ();
6538 }
6539 break;
6540
6541 case POLY_INT_CST:
6542 /* Not suported in C++. */
6543 return false;
6544
6545 case REAL_CST:
6546 if (const void *bytes = buf (sizeof (real_value)))
6547 TREE_REAL_CST_PTR (t)
6548 = reinterpret_cast<real_value *> (memcpy (ggc_alloc<real_value> (),
6549 bytes, sizeof (real_value)));
6550 break;
6551
6552 case STRING_CST:
6553 /* Streamed during start. */
6554 break;
6555
6556 case VECTOR_CST:
6557 for (unsigned ix = vector_cst_encoded_nelts (t); ix--;)
6558 RT (VECTOR_CST_ENCODED_ELT (t, ix));
6559 break;
6560
6561 /* Decls. */
6562 case VAR_DECL:
6563 if (DECL_CONTEXT (t)
6564 && TREE_CODE (DECL_CONTEXT (t)) != FUNCTION_DECL)
6565 break;
6566 /* FALLTHROUGH */
6567
6568 case RESULT_DECL:
6569 case PARM_DECL:
6570 if (DECL_HAS_VALUE_EXPR_P (t))
6571 {
6572 /* The DECL_VALUE hash table is a cache, thus if we're
6573 reading a duplicate (which we end up discarding), the
6574 value expr will also be cleaned up at the next gc. */
6575 tree val = tree_node ();
6576 SET_DECL_VALUE_EXPR (t, val);
6577 }
6578 /* FALLTHROUGH */
6579
6580 case CONST_DECL:
6581 case IMPORTED_DECL:
6582 RT (t->decl_common.initial);
6583 break;
6584
6585 case FIELD_DECL:
6586 RT (t->field_decl.offset);
6587 RT (t->field_decl.bit_field_type);
6588 RT (t->field_decl.qualifier);
6589 RT (t->field_decl.bit_offset);
6590 RT (t->field_decl.fcontext);
6591 RT (t->decl_common.initial);
6592 break;
6593
6594 case LABEL_DECL:
6595 RU (t->label_decl.label_decl_uid);
6596 RU (t->label_decl.eh_landing_pad_nr);
6597 break;
6598
6599 case FUNCTION_DECL:
6600 {
6601 unsigned bltin = u ();
6602 t->function_decl.built_in_class = built_in_class (bltin);
6603 if (bltin != NOT_BUILT_IN)
6604 {
6605 bltin = u ();
6606 DECL_UNCHECKED_FUNCTION_CODE (t) = built_in_function (bltin);
6607 }
6608
6609 RT (t->function_decl.personality);
6610 RT (t->function_decl.function_specific_target);
6611 RT (t->function_decl.function_specific_optimization);
6612 RT (t->function_decl.vindex);
6613 }
6614 break;
6615
6616 case USING_DECL:
6617 /* USING_DECL_DECLS */
6618 RT (t->decl_common.initial);
6619 /* FALLTHROUGH */
6620
6621 case TYPE_DECL:
6622 /* USING_DECL: USING_DECL_SCOPE */
6623 /* TYPE_DECL: DECL_ORIGINAL_TYPE */
6624 RT (t->decl_non_common.result);
6625 break;
6626
6627 /* Miscellaneous common nodes. */
6628 case BLOCK:
6629 t->block.locus = state->read_location (*this);
6630 t->block.end_locus = state->read_location (*this);
6631 t->block.vars = chained_decls ();
6632 /* nonlocalized_vars is middle-end. */
6633 RT (t->block.subblocks);
6634 RT (t->block.supercontext);
6635 RT (t->block.abstract_origin);
6636 /* fragment_origin, fragment_chain are middle-end. */
6637 RT (t->block.chain);
6638 /* nonlocalized_vars, block_num, die are middle endy/debug
6639 things. */
6640 break;
6641
6642 case CALL_EXPR:
6643 RUC (internal_fn, t->base.u.ifn);
6644 break;
6645
6646 case CONSTRUCTOR:
6647 if (unsigned len = u ())
6648 {
6649 vec_alloc (t->constructor.elts, len);
6650 for (unsigned ix = 0; ix != len; ix++)
6651 {
6652 constructor_elt elt;
6653
6654 RT (elt.index);
6655 RTU (elt.value);
6656 t->constructor.elts->quick_push (elt);
6657 }
6658 }
6659 break;
6660
6661 case OMP_CLAUSE:
6662 {
6663 RU (t->omp_clause.subcode.map_kind);
6664 t->omp_clause.locus = state->read_location (*this);
6665
6666 unsigned len = omp_clause_num_ops[OMP_CLAUSE_CODE (t)];
6667 for (unsigned ix = 0; ix != len; ix++)
6668 RT (t->omp_clause.ops[ix]);
6669 }
6670 break;
6671
6672 case STATEMENT_LIST:
6673 {
6674 tree_stmt_iterator iter = tsi_start (t);
6675 for (tree stmt; RT (stmt);)
6676 tsi_link_after (&iter, stmt, TSI_CONTINUE_LINKING);
6677 }
6678 break;
6679
6680 case OPTIMIZATION_NODE:
6681 case TARGET_OPTION_NODE:
6682 /* Not yet implemented, see trees_out::core_vals. */
6683 gcc_unreachable ();
6684 break;
6685
6686 case TREE_BINFO:
6687 RT (t->binfo.common.chain);
6688 RT (t->binfo.offset);
6689 RT (t->binfo.inheritance);
6690 RT (t->binfo.vptr_field);
6691
6692 /* Do not mark the vtables as USED in the address expressions
6693 here. */
6694 unused++;
6695 RT (t->binfo.vtable);
6696 RT (t->binfo.virtuals);
6697 RT (t->binfo.vtt_subvtt);
6698 RT (t->binfo.vtt_vptr);
6699 unused--;
6700
6701 BINFO_BASE_ACCESSES (t) = tree_vec ();
6702 if (!get_overrun ())
6703 {
6704 unsigned num = vec_safe_length (BINFO_BASE_ACCESSES (t));
6705 for (unsigned ix = 0; ix != num; ix++)
6706 BINFO_BASE_APPEND (t, tree_node ());
6707 }
6708 break;
6709
6710 case TREE_LIST:
6711 RT (t->list.purpose);
6712 RT (t->list.value);
6713 RT (t->list.common.chain);
6714 break;
6715
6716 case TREE_VEC:
6717 for (unsigned ix = TREE_VEC_LENGTH (t); ix--;)
6718 RT (TREE_VEC_ELT (t, ix));
6719 RT (t->type_common.common.chain);
6720 break;
6721
6722 /* C++-specific nodes ... */
6723 case BASELINK:
6724 RT (((lang_tree_node *)t)->baselink.binfo);
6725 RTU (((lang_tree_node *)t)->baselink.functions);
6726 RT (((lang_tree_node *)t)->baselink.access_binfo);
6727 break;
6728
6729 case CONSTRAINT_INFO:
6730 RT (((lang_tree_node *)t)->constraint_info.template_reqs);
6731 RT (((lang_tree_node *)t)->constraint_info.declarator_reqs);
6732 RT (((lang_tree_node *)t)->constraint_info.associated_constr);
6733 break;
6734
6735 case DEFERRED_NOEXCEPT:
6736 RT (((lang_tree_node *)t)->deferred_noexcept.pattern);
6737 RT (((lang_tree_node *)t)->deferred_noexcept.args);
6738 break;
6739
6740 case LAMBDA_EXPR:
6741 RT (((lang_tree_node *)t)->lambda_expression.capture_list);
6742 RT (((lang_tree_node *)t)->lambda_expression.this_capture);
6743 RT (((lang_tree_node *)t)->lambda_expression.extra_scope);
6744 /* lambda_expression.pending_proxies is NULL */
6745 ((lang_tree_node *)t)->lambda_expression.locus
6746 = state->read_location (*this);
6747 RUC (cp_lambda_default_capture_mode_type,
6748 ((lang_tree_node *)t)->lambda_expression.default_capture_mode);
6749 RU (((lang_tree_node *)t)->lambda_expression.discriminator);
6750 break;
6751
6752 case OVERLOAD:
6753 RT (((lang_tree_node *)t)->overload.function);
6754 RT (t->common.chain);
6755 break;
6756
6757 case PTRMEM_CST:
6758 RT (((lang_tree_node *)t)->ptrmem.member);
6759 break;
6760
6761 case STATIC_ASSERT:
6762 RT (((lang_tree_node *)t)->static_assertion.condition);
6763 RT (((lang_tree_node *)t)->static_assertion.message);
6764 ((lang_tree_node *)t)->static_assertion.location
6765 = state->read_location (*this);
6766 break;
6767
6768 case TEMPLATE_DECL:
6769 /* Streamed when reading the raw template decl itself. */
6770 gcc_assert (((lang_tree_node *)t)->template_decl.arguments);
6771 gcc_assert (((lang_tree_node *)t)->template_decl.result);
6772 if (DECL_UNINSTANTIATED_TEMPLATE_FRIEND_P (t))
6773 RT (DECL_CHAIN (t));
6774 break;
6775
6776 case TEMPLATE_INFO:
6777 RT (((lang_tree_node *)t)->template_info.tmpl);
6778 RT (((lang_tree_node *)t)->template_info.args);
6779 if (unsigned len = u ())
6780 {
6781 auto &ac = (((lang_tree_node *)t)
6782 ->template_info.deferred_access_checks);
6783 vec_alloc (ac, len);
6784 for (unsigned ix = 0; ix != len; ix++)
6785 {
6786 deferred_access_check m;
6787
6788 RT (m.binfo);
6789 RT (m.decl);
6790 RT (m.diag_decl);
6791 m.loc = state->read_location (*this);
6792 ac->quick_push (m);
6793 }
6794 }
6795 break;
6796
6797 case TEMPLATE_PARM_INDEX:
6798 RU (((lang_tree_node *)t)->tpi.index);
6799 RU (((lang_tree_node *)t)->tpi.level);
6800 RU (((lang_tree_node *)t)->tpi.orig_level);
6801 RT (((lang_tree_node *)t)->tpi.decl);
6802 break;
6803
6804 case TRAIT_EXPR:
6805 RT (((lang_tree_node *)t)->trait_expression.type1);
6806 RT (((lang_tree_node *)t)->trait_expression.type2);
6807 RUC (cp_trait_kind, ((lang_tree_node *)t)->trait_expression.kind);
6808 break;
6809 }
6810
6811 if (CODE_CONTAINS_STRUCT (code, TS_TYPED))
6812 {
6813 tree type = tree_node ();
6814
6815 if (type && code == ENUMERAL_TYPE && !ENUM_FIXED_UNDERLYING_TYPE_P (t))
6816 {
6817 unsigned precision = u ();
6818
6819 type = build_distinct_type_copy (type);
6820 TYPE_PRECISION (type) = precision;
6821 set_min_and_max_values_for_integral_type (type, precision,
6822 TYPE_SIGN (type));
6823 }
6824
6825 if (code != TEMPLATE_DECL)
6826 t->typed.type = type;
6827 }
6828
6829 #undef RT
6830 #undef RM
6831 #undef RU
6832 return !get_overrun ();
6833 }
6834
6835 void
6836 trees_out::lang_decl_vals (tree t)
6837 {
6838 const struct lang_decl *lang = DECL_LANG_SPECIFIC (t);
6839 #define WU(X) (u (X))
6840 #define WT(X) (tree_node (X))
6841 /* Module index already written. */
6842 switch (lang->u.base.selector)
6843 {
6844 default:
6845 gcc_unreachable ();
6846
6847 case lds_fn: /* lang_decl_fn. */
6848 if (streaming_p ())
6849 {
6850 if (DECL_NAME (t) && IDENTIFIER_OVL_OP_P (DECL_NAME (t)))
6851 WU (lang->u.fn.ovl_op_code);
6852 }
6853
6854 if (DECL_CLASS_SCOPE_P (t))
6855 WT (lang->u.fn.context);
6856
6857 if (lang->u.fn.thunk_p)
6858 {
6859 /* The thunked-to function. */
6860 WT (lang->u.fn.befriending_classes);
6861 if (streaming_p ())
6862 wi (lang->u.fn.u5.fixed_offset);
6863 }
6864 else
6865 WT (lang->u.fn.u5.cloned_function);
6866
6867 if (FNDECL_USED_AUTO (t))
6868 WT (lang->u.fn.u.saved_auto_return_type);
6869
6870 goto lds_min;
6871
6872 case lds_decomp: /* lang_decl_decomp. */
6873 WT (lang->u.decomp.base);
6874 goto lds_min;
6875
6876 case lds_min: /* lang_decl_min. */
6877 lds_min:
6878 WT (lang->u.min.template_info);
6879 {
6880 tree access = lang->u.min.access;
6881
6882 /* DECL_ACCESS needs to be maintained by the definition of the
6883 (derived) class that changes the access. The other users
6884 of DECL_ACCESS need to write it here. */
6885 if (!DECL_THUNK_P (t)
6886 && (DECL_CONTEXT (t) && TYPE_P (DECL_CONTEXT (t))))
6887 access = NULL_TREE;
6888
6889 WT (access);
6890 }
6891 break;
6892
6893 case lds_ns: /* lang_decl_ns. */
6894 break;
6895
6896 case lds_parm: /* lang_decl_parm. */
6897 if (streaming_p ())
6898 {
6899 WU (lang->u.parm.level);
6900 WU (lang->u.parm.index);
6901 }
6902 break;
6903 }
6904 #undef WU
6905 #undef WT
6906 }
6907
6908 bool
6909 trees_in::lang_decl_vals (tree t)
6910 {
6911 struct lang_decl *lang = DECL_LANG_SPECIFIC (t);
6912 #define RU(X) ((X) = u ())
6913 #define RT(X) ((X) = tree_node ())
6914
6915 /* Module index already read. */
6916 switch (lang->u.base.selector)
6917 {
6918 default:
6919 gcc_unreachable ();
6920
6921 case lds_fn: /* lang_decl_fn. */
6922 if (DECL_NAME (t) && IDENTIFIER_OVL_OP_P (DECL_NAME (t)))
6923 {
6924 unsigned code = u ();
6925
6926 /* Check consistency. */
6927 if (code >= OVL_OP_MAX
6928 || (ovl_op_info[IDENTIFIER_ASSIGN_OP_P (DECL_NAME (t))][code]
6929 .ovl_op_code) == OVL_OP_ERROR_MARK)
6930 set_overrun ();
6931 else
6932 lang->u.fn.ovl_op_code = code;
6933 }
6934
6935 if (DECL_CLASS_SCOPE_P (t))
6936 RT (lang->u.fn.context);
6937
6938 if (lang->u.fn.thunk_p)
6939 {
6940 RT (lang->u.fn.befriending_classes);
6941 lang->u.fn.u5.fixed_offset = wi ();
6942 }
6943 else
6944 RT (lang->u.fn.u5.cloned_function);
6945
6946 if (FNDECL_USED_AUTO (t))
6947 RT (lang->u.fn.u.saved_auto_return_type);
6948 goto lds_min;
6949
6950 case lds_decomp: /* lang_decl_decomp. */
6951 RT (lang->u.decomp.base);
6952 goto lds_min;
6953
6954 case lds_min: /* lang_decl_min. */
6955 lds_min:
6956 RT (lang->u.min.template_info);
6957 RT (lang->u.min.access);
6958 break;
6959
6960 case lds_ns: /* lang_decl_ns. */
6961 break;
6962
6963 case lds_parm: /* lang_decl_parm. */
6964 RU (lang->u.parm.level);
6965 RU (lang->u.parm.index);
6966 break;
6967 }
6968 #undef RU
6969 #undef RT
6970 return !get_overrun ();
6971 }
6972
6973 /* Most of the value contents of lang_type is streamed in
6974 define_class. */
6975
6976 void
6977 trees_out::lang_type_vals (tree t)
6978 {
6979 const struct lang_type *lang = TYPE_LANG_SPECIFIC (t);
6980 #define WU(X) (u (X))
6981 #define WT(X) (tree_node (X))
6982 if (streaming_p ())
6983 WU (lang->align);
6984 #undef WU
6985 #undef WT
6986 }
6987
6988 bool
6989 trees_in::lang_type_vals (tree t)
6990 {
6991 struct lang_type *lang = TYPE_LANG_SPECIFIC (t);
6992 #define RU(X) ((X) = u ())
6993 #define RT(X) ((X) = tree_node ())
6994 RU (lang->align);
6995 #undef RU
6996 #undef RT
6997 return !get_overrun ();
6998 }
6999
7000 /* Write out the bools of T, including information about any
7001 LANG_SPECIFIC information. Including allocation of any lang
7002 specific object. */
7003
7004 void
7005 trees_out::tree_node_bools (tree t)
7006 {
7007 gcc_checking_assert (streaming_p ());
7008
7009 /* We should never stream a namespace. */
7010 gcc_checking_assert (TREE_CODE (t) != NAMESPACE_DECL
7011 || DECL_NAMESPACE_ALIAS (t));
7012
7013 core_bools (t);
7014
7015 switch (TREE_CODE_CLASS (TREE_CODE (t)))
7016 {
7017 case tcc_declaration:
7018 {
7019 bool specific = DECL_LANG_SPECIFIC (t) != NULL;
7020 b (specific);
7021 if (specific && VAR_P (t))
7022 b (DECL_DECOMPOSITION_P (t));
7023 if (specific)
7024 lang_decl_bools (t);
7025 }
7026 break;
7027
7028 case tcc_type:
7029 {
7030 bool specific = (TYPE_MAIN_VARIANT (t) == t
7031 && TYPE_LANG_SPECIFIC (t) != NULL);
7032 gcc_assert (TYPE_LANG_SPECIFIC (t)
7033 == TYPE_LANG_SPECIFIC (TYPE_MAIN_VARIANT (t)));
7034
7035 b (specific);
7036 if (specific)
7037 lang_type_bools (t);
7038 }
7039 break;
7040
7041 default:
7042 break;
7043 }
7044
7045 bflush ();
7046 }
7047
7048 bool
7049 trees_in::tree_node_bools (tree t)
7050 {
7051 bool ok = core_bools (t);
7052
7053 if (ok)
7054 switch (TREE_CODE_CLASS (TREE_CODE (t)))
7055 {
7056 case tcc_declaration:
7057 if (b ())
7058 {
7059 bool decomp = VAR_P (t) && b ();
7060
7061 ok = maybe_add_lang_decl_raw (t, decomp);
7062 if (ok)
7063 ok = lang_decl_bools (t);
7064 }
7065 break;
7066
7067 case tcc_type:
7068 if (b ())
7069 {
7070 ok = maybe_add_lang_type_raw (t);
7071 if (ok)
7072 ok = lang_type_bools (t);
7073 }
7074 break;
7075
7076 default:
7077 break;
7078 }
7079
7080 bflush ();
7081 if (!ok || get_overrun ())
7082 return false;
7083
7084 return true;
7085 }
7086
7087
7088 /* Write out the lang-specifc vals of node T. */
7089
7090 void
7091 trees_out::lang_vals (tree t)
7092 {
7093 switch (TREE_CODE_CLASS (TREE_CODE (t)))
7094 {
7095 case tcc_declaration:
7096 if (DECL_LANG_SPECIFIC (t))
7097 lang_decl_vals (t);
7098 break;
7099
7100 case tcc_type:
7101 if (TYPE_MAIN_VARIANT (t) == t && TYPE_LANG_SPECIFIC (t))
7102 lang_type_vals (t);
7103 break;
7104
7105 default:
7106 break;
7107 }
7108 }
7109
7110 bool
7111 trees_in::lang_vals (tree t)
7112 {
7113 bool ok = true;
7114
7115 switch (TREE_CODE_CLASS (TREE_CODE (t)))
7116 {
7117 case tcc_declaration:
7118 if (DECL_LANG_SPECIFIC (t))
7119 ok = lang_decl_vals (t);
7120 break;
7121
7122 case tcc_type:
7123 if (TYPE_LANG_SPECIFIC (t))
7124 ok = lang_type_vals (t);
7125 else
7126 TYPE_LANG_SPECIFIC (t) = TYPE_LANG_SPECIFIC (TYPE_MAIN_VARIANT (t));
7127 break;
7128
7129 default:
7130 break;
7131 }
7132
7133 return ok;
7134 }
7135
7136 /* Write out the value fields of node T. */
7137
7138 void
7139 trees_out::tree_node_vals (tree t)
7140 {
7141 core_vals (t);
7142 lang_vals (t);
7143 }
7144
7145 bool
7146 trees_in::tree_node_vals (tree t)
7147 {
7148 bool ok = core_vals (t);
7149 if (ok)
7150 ok = lang_vals (t);
7151
7152 return ok;
7153 }
7154
7155
7156 /* If T is a back reference, fixed reference or NULL, write out it's
7157 code and return WK_none. Otherwise return WK_value if we must write
7158 by value, or WK_normal otherwise. */
7159
7160 walk_kind
7161 trees_out::ref_node (tree t)
7162 {
7163 if (!t)
7164 {
7165 if (streaming_p ())
7166 {
7167 /* NULL_TREE -> tt_null. */
7168 null_count++;
7169 i (tt_null);
7170 }
7171 return WK_none;
7172 }
7173
7174 if (!TREE_VISITED (t))
7175 return WK_normal;
7176
7177 /* An already-visited tree. It must be in the map. */
7178 int val = get_tag (t);
7179
7180 if (val == tag_value)
7181 /* An entry we should walk into. */
7182 return WK_value;
7183
7184 const char *kind;
7185
7186 if (val <= tag_backref)
7187 {
7188 /* Back reference -> -ve number */
7189 if (streaming_p ())
7190 i (val);
7191 kind = "backref";
7192 }
7193 else if (val >= tag_fixed)
7194 {
7195 /* Fixed reference -> tt_fixed */
7196 val -= tag_fixed;
7197 if (streaming_p ())
7198 i (tt_fixed), u (val);
7199 kind = "fixed";
7200 }
7201
7202 if (streaming_p ())
7203 {
7204 back_ref_count++;
7205 dump (dumper::TREE)
7206 && dump ("Wrote %s:%d %C:%N%S", kind, val, TREE_CODE (t), t, t);
7207 }
7208 return WK_none;
7209 }
7210
7211 tree
7212 trees_in::back_ref (int tag)
7213 {
7214 tree res = NULL_TREE;
7215
7216 if (tag < 0 && unsigned (~tag) < back_refs.length ())
7217 res = back_refs[~tag];
7218
7219 if (!res
7220 /* Checking TREE_CODE is a dereference, so we know this is not a
7221 wild pointer. Checking the code provides evidence we've not
7222 corrupted something. */
7223 || TREE_CODE (res) >= MAX_TREE_CODES)
7224 set_overrun ();
7225 else
7226 dump (dumper::TREE) && dump ("Read backref:%d found %C:%N%S", tag,
7227 TREE_CODE (res), res, res);
7228 return res;
7229 }
7230
7231 unsigned
7232 trees_out::add_indirect_tpl_parms (tree parms)
7233 {
7234 unsigned len = 0;
7235 for (; parms; parms = TREE_CHAIN (parms), len++)
7236 {
7237 if (TREE_VISITED (parms))
7238 break;
7239
7240 int tag = insert (parms);
7241 if (streaming_p ())
7242 dump (dumper::TREE)
7243 && dump ("Indirect:%d template's parameter %u %C:%N",
7244 tag, len, TREE_CODE (parms), parms);
7245 }
7246
7247 if (streaming_p ())
7248 u (len);
7249
7250 return len;
7251 }
7252
7253 unsigned
7254 trees_in::add_indirect_tpl_parms (tree parms)
7255 {
7256 unsigned len = u ();
7257 for (unsigned ix = 0; ix != len; parms = TREE_CHAIN (parms), ix++)
7258 {
7259 int tag = insert (parms);
7260 dump (dumper::TREE)
7261 && dump ("Indirect:%d template's parameter %u %C:%N",
7262 tag, ix, TREE_CODE (parms), parms);
7263 }
7264
7265 return len;
7266 }
7267
7268 /* We've just found DECL by name. Insert nodes that come with it, but
7269 cannot be found by name, so we'll not accidentally walk into them. */
7270
7271 void
7272 trees_out::add_indirects (tree decl)
7273 {
7274 unsigned count = 0;
7275
7276 // FIXME:OPTIMIZATION We'll eventually want default fn parms of
7277 // templates and perhaps default template parms too. The former can
7278 // be referenced from instantiations (as they are lazily
7279 // instantiated). Also (deferred?) exception specifications of
7280 // templates. See the note about PARM_DECLs in trees_out::decl_node.
7281 tree inner = decl;
7282 if (TREE_CODE (decl) == TEMPLATE_DECL)
7283 {
7284 count += add_indirect_tpl_parms (DECL_TEMPLATE_PARMS (decl));
7285
7286 inner = DECL_TEMPLATE_RESULT (decl);
7287 int tag = insert (inner);
7288 if (streaming_p ())
7289 dump (dumper::TREE)
7290 && dump ("Indirect:%d template's result %C:%N",
7291 tag, TREE_CODE (inner), inner);
7292 count++;
7293 }
7294
7295 if (TREE_CODE (inner) == TYPE_DECL)
7296 {
7297 /* Make sure the type is in the map too. Otherwise we get
7298 different RECORD_TYPEs for the same type, and things go
7299 south. */
7300 tree type = TREE_TYPE (inner);
7301 gcc_checking_assert (DECL_ORIGINAL_TYPE (inner)
7302 || TYPE_NAME (type) == inner);
7303 int tag = insert (type);
7304 if (streaming_p ())
7305 dump (dumper::TREE) && dump ("Indirect:%d decl's type %C:%N", tag,
7306 TREE_CODE (type), type);
7307 count++;
7308 }
7309
7310 if (streaming_p ())
7311 {
7312 u (count);
7313 dump (dumper::TREE) && dump ("Inserted %u indirects", count);
7314 }
7315 }
7316
7317 bool
7318 trees_in::add_indirects (tree decl)
7319 {
7320 unsigned count = 0;
7321
7322 tree inner = decl;
7323 if (TREE_CODE (inner) == TEMPLATE_DECL)
7324 {
7325 count += add_indirect_tpl_parms (DECL_TEMPLATE_PARMS (decl));
7326
7327 inner = DECL_TEMPLATE_RESULT (decl);
7328 int tag = insert (inner);
7329 dump (dumper::TREE)
7330 && dump ("Indirect:%d templates's result %C:%N", tag,
7331 TREE_CODE (inner), inner);
7332 count++;
7333 }
7334
7335 if (TREE_CODE (inner) == TYPE_DECL)
7336 {
7337 tree type = TREE_TYPE (inner);
7338 gcc_checking_assert (DECL_ORIGINAL_TYPE (inner)
7339 || TYPE_NAME (type) == inner);
7340 int tag = insert (type);
7341 dump (dumper::TREE)
7342 && dump ("Indirect:%d decl's type %C:%N", tag, TREE_CODE (type), type);
7343 count++;
7344 }
7345
7346 dump (dumper::TREE) && dump ("Inserted %u indirects", count);
7347 return count == u ();
7348 }
7349
7350 /* Stream a template parameter. There are 4.5 kinds of parameter:
7351 a) Template - TEMPLATE_DECL->TYPE_DECL->TEMPLATE_TEMPLATE_PARM
7352 TEMPLATE_TYPE_PARM_INDEX TPI
7353 b) Type - TYPE_DECL->TEMPLATE_TYPE_PARM TEMPLATE_TYPE_PARM_INDEX TPI
7354 c.1) NonTYPE - PARM_DECL DECL_INITIAL TPI We meet this first
7355 c.2) NonTYPE - CONST_DECL DECL_INITIAL Same TPI
7356 d) BoundTemplate - TYPE_DECL->BOUND_TEMPLATE_TEMPLATE_PARM
7357 TEMPLATE_TYPE_PARM_INDEX->TPI
7358 TEMPLATE_TEMPLATE_PARM_INFO->TEMPLATE_INFO
7359
7360 All of these point to a TEMPLATE_PARM_INDEX, and #B also has a TEMPLATE_INFO
7361 */
7362
7363 void
7364 trees_out::tpl_parm_value (tree parm)
7365 {
7366 gcc_checking_assert (DECL_P (parm) && DECL_TEMPLATE_PARM_P (parm));
7367
7368 int parm_tag = insert (parm);
7369 if (streaming_p ())
7370 {
7371 i (tt_tpl_parm);
7372 dump (dumper::TREE) && dump ("Writing template parm:%d %C:%N",
7373 parm_tag, TREE_CODE (parm), parm);
7374 start (parm);
7375 tree_node_bools (parm);
7376 }
7377
7378 tree inner = parm;
7379 if (TREE_CODE (inner) == TEMPLATE_DECL)
7380 {
7381 inner = DECL_TEMPLATE_RESULT (inner);
7382 int inner_tag = insert (inner);
7383 if (streaming_p ())
7384 {
7385 dump (dumper::TREE) && dump ("Writing inner template parm:%d %C:%N",
7386 inner_tag, TREE_CODE (inner), inner);
7387 start (inner);
7388 tree_node_bools (inner);
7389 }
7390 }
7391
7392 tree type = NULL_TREE;
7393 if (TREE_CODE (inner) == TYPE_DECL)
7394 {
7395 type = TREE_TYPE (inner);
7396 int type_tag = insert (type);
7397 if (streaming_p ())
7398 {
7399 dump (dumper::TREE) && dump ("Writing template parm type:%d %C:%N",
7400 type_tag, TREE_CODE (type), type);
7401 start (type);
7402 tree_node_bools (type);
7403 }
7404 }
7405
7406 if (inner != parm)
7407 {
7408 /* This is a template-template parameter. */
7409 unsigned tpl_levels = 0;
7410 tpl_header (parm, &tpl_levels);
7411 tpl_parms_fini (parm, tpl_levels);
7412 }
7413
7414 tree_node_vals (parm);
7415 if (inner != parm)
7416 tree_node_vals (inner);
7417 if (type)
7418 {
7419 tree_node_vals (type);
7420 if (DECL_NAME (inner) == auto_identifier
7421 || DECL_NAME (inner) == decltype_auto_identifier)
7422 {
7423 /* Placeholder auto. */
7424 tree_node (DECL_INITIAL (inner));
7425 tree_node (DECL_SIZE_UNIT (inner));
7426 }
7427 }
7428
7429 if (streaming_p ())
7430 dump (dumper::TREE) && dump ("Wrote template parm:%d %C:%N",
7431 parm_tag, TREE_CODE (parm), parm);
7432 }
7433
7434 tree
7435 trees_in::tpl_parm_value ()
7436 {
7437 tree parm = start ();
7438 if (!parm || !tree_node_bools (parm))
7439 return NULL_TREE;
7440
7441 int parm_tag = insert (parm);
7442 dump (dumper::TREE) && dump ("Reading template parm:%d %C:%N",
7443 parm_tag, TREE_CODE (parm), parm);
7444
7445 tree inner = parm;
7446 if (TREE_CODE (inner) == TEMPLATE_DECL)
7447 {
7448 inner = start ();
7449 if (!inner || !tree_node_bools (inner))
7450 return NULL_TREE;
7451 int inner_tag = insert (inner);
7452 dump (dumper::TREE) && dump ("Reading inner template parm:%d %C:%N",
7453 inner_tag, TREE_CODE (inner), inner);
7454 DECL_TEMPLATE_RESULT (parm) = inner;
7455 }
7456
7457 tree type = NULL_TREE;
7458 if (TREE_CODE (inner) == TYPE_DECL)
7459 {
7460 type = start ();
7461 if (!type || !tree_node_bools (type))
7462 return NULL_TREE;
7463 int type_tag = insert (type);
7464 dump (dumper::TREE) && dump ("Reading template parm type:%d %C:%N",
7465 type_tag, TREE_CODE (type), type);
7466
7467 TREE_TYPE (inner) = TREE_TYPE (parm) = type;
7468 TYPE_NAME (type) = parm;
7469 }
7470
7471 if (inner != parm)
7472 {
7473 /* A template template parameter. */
7474 unsigned tpl_levels = 0;
7475 tpl_header (parm, &tpl_levels);
7476 tpl_parms_fini (parm, tpl_levels);
7477 }
7478
7479 tree_node_vals (parm);
7480 if (inner != parm)
7481 tree_node_vals (inner);
7482 if (type)
7483 {
7484 tree_node_vals (type);
7485 if (DECL_NAME (inner) == auto_identifier
7486 || DECL_NAME (inner) == decltype_auto_identifier)
7487 {
7488 /* Placeholder auto. */
7489 DECL_INITIAL (inner) = tree_node ();
7490 DECL_SIZE_UNIT (inner) = tree_node ();
7491 }
7492 if (TYPE_CANONICAL (type))
7493 {
7494 gcc_checking_assert (TYPE_CANONICAL (type) == type);
7495 TYPE_CANONICAL (type) = canonical_type_parameter (type);
7496 }
7497 }
7498
7499 dump (dumper::TREE) && dump ("Read template parm:%d %C:%N",
7500 parm_tag, TREE_CODE (parm), parm);
7501
7502 return parm;
7503 }
7504
7505 void
7506 trees_out::install_entity (tree decl, depset *dep)
7507 {
7508 gcc_checking_assert (streaming_p ());
7509
7510 /* Write the entity index, so we can insert it as soon as we
7511 know this is new. */
7512 u (dep ? dep->cluster + 1 : 0);
7513 if (CHECKING_P && dep)
7514 {
7515 /* Add it to the entity map, such that we can tell it is
7516 part of us. */
7517 bool existed;
7518 unsigned *slot = &entity_map->get_or_insert
7519 (DECL_UID (decl), &existed);
7520 if (existed)
7521 /* If it existed, it should match. */
7522 gcc_checking_assert (decl == (*entity_ary)[*slot]);
7523 *slot = ~dep->cluster;
7524 }
7525 }
7526
7527 bool
7528 trees_in::install_entity (tree decl)
7529 {
7530 unsigned entity_index = u ();
7531 if (!entity_index)
7532 return false;
7533
7534 if (entity_index > state->entity_num)
7535 {
7536 set_overrun ();
7537 return false;
7538 }
7539
7540 /* Insert the real decl into the entity ary. */
7541 unsigned ident = state->entity_lwm + entity_index - 1;
7542 binding_slot &elt = (*entity_ary)[ident];
7543
7544 /* See module_state::read_pendings for how this got set. */
7545 int pending = elt.get_lazy () & 3;
7546
7547 elt = decl;
7548
7549 /* And into the entity map, if it's not already there. */
7550 if (!DECL_LANG_SPECIFIC (decl)
7551 || !DECL_MODULE_ENTITY_P (decl))
7552 {
7553 retrofit_lang_decl (decl);
7554 DECL_MODULE_ENTITY_P (decl) = true;
7555
7556 /* Insert into the entity hash (it cannot already be there). */
7557 bool existed;
7558 unsigned &slot = entity_map->get_or_insert (DECL_UID (decl), &existed);
7559 gcc_checking_assert (!existed);
7560 slot = ident;
7561 }
7562 else if (pending != 0)
7563 {
7564 unsigned key_ident = import_entity_index (decl);
7565 if (pending & 1)
7566 if (!pending_table->add (key_ident, ~ident))
7567 pending &= ~1;
7568
7569 if (pending & 2)
7570 if (!pending_table->add (~key_ident, ~ident))
7571 pending &= ~2;
7572 }
7573
7574 if (pending & 1)
7575 DECL_MODULE_PENDING_SPECIALIZATIONS_P (decl) = true;
7576
7577 if (pending & 2)
7578 {
7579 DECL_MODULE_PENDING_MEMBERS_P (decl) = true;
7580 gcc_checking_assert (TREE_CODE (decl) != TEMPLATE_DECL);
7581 }
7582
7583 return true;
7584 }
7585
7586 static bool has_definition (tree decl);
7587
7588 /* DECL is a decl node that must be written by value. DEP is the
7589 decl's depset. */
7590
7591 void
7592 trees_out::decl_value (tree decl, depset *dep)
7593 {
7594 /* We should not be writing clones or template parms. */
7595 gcc_checking_assert (DECL_P (decl)
7596 && !DECL_CLONED_FUNCTION_P (decl)
7597 && !DECL_TEMPLATE_PARM_P (decl));
7598
7599 /* We should never be writing non-typedef ptrmemfuncs by value. */
7600 gcc_checking_assert (TREE_CODE (decl) != TYPE_DECL
7601 || DECL_ORIGINAL_TYPE (decl)
7602 || !TYPE_PTRMEMFUNC_P (TREE_TYPE (decl)));
7603
7604 merge_kind mk = get_merge_kind (decl, dep);
7605
7606 if (CHECKING_P)
7607 {
7608 /* Never start in the middle of a template. */
7609 int use_tpl = -1;
7610 if (tree ti = node_template_info (decl, use_tpl))
7611 gcc_checking_assert (TREE_CODE (TI_TEMPLATE (ti)) == OVERLOAD
7612 || TREE_CODE (TI_TEMPLATE (ti)) == FIELD_DECL
7613 || (DECL_TEMPLATE_RESULT (TI_TEMPLATE (ti))
7614 != decl));
7615 }
7616
7617 if (streaming_p ())
7618 {
7619 /* A new node -> tt_decl. */
7620 decl_val_count++;
7621 i (tt_decl);
7622 u (mk);
7623 start (decl);
7624
7625 if (mk != MK_unique)
7626 {
7627 if (!(mk & MK_template_mask) && !state->is_header ())
7628 {
7629 /* Tell the importer whether this is a global module entity,
7630 or a module entity. This bool merges into the next block
7631 of bools. Sneaky. */
7632 tree o = get_originating_module_decl (decl);
7633 bool is_mod = false;
7634
7635 if (dep && dep->is_alias_tmpl_inst ())
7636 /* Alias template instantiations are templatey, but
7637 found by name. */
7638 is_mod = false;
7639 else if (DECL_LANG_SPECIFIC (o) && DECL_MODULE_PURVIEW_P (o))
7640 is_mod = true;
7641 b (is_mod);
7642 }
7643 b (dep && dep->has_defn ());
7644 }
7645 tree_node_bools (decl);
7646 }
7647
7648 int tag = insert (decl, WK_value);
7649 if (streaming_p ())
7650 dump (dumper::TREE)
7651 && dump ("Writing %s:%d %C:%N%S", merge_kind_name[mk], tag,
7652 TREE_CODE (decl), decl, decl);
7653
7654 tree inner = decl;
7655 int inner_tag = 0;
7656 if (TREE_CODE (decl) == TEMPLATE_DECL)
7657 {
7658 if (dep && dep->is_alias_tmpl_inst ())
7659 inner = NULL_TREE;
7660 else
7661 {
7662 inner = DECL_TEMPLATE_RESULT (decl);
7663 inner_tag = insert (inner, WK_value);
7664 }
7665
7666 if (streaming_p ())
7667 {
7668 int code = inner ? TREE_CODE (inner) : 0;
7669 u (code);
7670 if (inner)
7671 {
7672 start (inner, true);
7673 tree_node_bools (inner);
7674 dump (dumper::TREE)
7675 && dump ("Writing %s:%d %C:%N%S", merge_kind_name[mk], inner_tag,
7676 TREE_CODE (inner), inner, inner);
7677 }
7678 }
7679 }
7680
7681 tree type = NULL_TREE;
7682 int type_tag = 0;
7683 tree stub_decl = NULL_TREE;
7684 int stub_tag = 0;
7685 if (inner && TREE_CODE (inner) == TYPE_DECL)
7686 {
7687 type = TREE_TYPE (inner);
7688 bool has_type = (type == TYPE_MAIN_VARIANT (type)
7689 && TYPE_NAME (type) == inner);
7690
7691 if (streaming_p ())
7692 u (has_type ? TREE_CODE (type) : 0);
7693
7694 if (has_type)
7695 {
7696 type_tag = insert (type, WK_value);
7697 if (streaming_p ())
7698 {
7699 start (type, true);
7700 tree_node_bools (type);
7701 dump (dumper::TREE)
7702 && dump ("Writing type:%d %C:%N", type_tag,
7703 TREE_CODE (type), type);
7704 }
7705
7706 stub_decl = TYPE_STUB_DECL (type);
7707 bool has_stub = inner != stub_decl;
7708 if (streaming_p ())
7709 u (has_stub ? TREE_CODE (stub_decl) : 0);
7710 if (has_stub)
7711 {
7712 stub_tag = insert (stub_decl);
7713 if (streaming_p ())
7714 {
7715 start (stub_decl, true);
7716 tree_node_bools (stub_decl);
7717 dump (dumper::TREE)
7718 && dump ("Writing stub_decl:%d %C:%N", stub_tag,
7719 TREE_CODE (stub_decl), stub_decl);
7720 }
7721 }
7722 else
7723 stub_decl = NULL_TREE;
7724 }
7725 else
7726 /* Regular typedef. */
7727 type = NULL_TREE;
7728 }
7729
7730 /* Stream the container, we want it correctly canonicalized before
7731 we start emitting keys for this decl. */
7732 tree container = decl_container (decl);
7733
7734 unsigned tpl_levels = 0;
7735 if (decl != inner)
7736 tpl_header (decl, &tpl_levels);
7737 if (inner && TREE_CODE (inner) == FUNCTION_DECL)
7738 fn_parms_init (inner);
7739
7740 /* Now write out the merging information, and then really
7741 install the tag values. */
7742 key_mergeable (tag, mk, decl, inner, container, dep);
7743
7744 if (streaming_p ())
7745 dump (dumper::MERGE)
7746 && dump ("Wrote:%d's %s merge key %C:%N", tag,
7747 merge_kind_name[mk], TREE_CODE (decl), decl);
7748
7749 if (inner && TREE_CODE (inner) == FUNCTION_DECL)
7750 fn_parms_fini (inner);
7751
7752 if (!is_key_order ())
7753 tree_node_vals (decl);
7754
7755 if (inner_tag)
7756 {
7757 if (!is_key_order ())
7758 tree_node_vals (inner);
7759 tpl_parms_fini (decl, tpl_levels);
7760 }
7761 else if (!inner)
7762 {
7763 /* A template alias instantiation. */
7764 inner = DECL_TEMPLATE_RESULT (decl);
7765 if (!is_key_order ())
7766 tree_node (inner);
7767 if (streaming_p ())
7768 dump (dumper::TREE)
7769 && dump ("Wrote(%d) alias template %C:%N",
7770 get_tag (inner), TREE_CODE (inner), inner);
7771 inner = NULL_TREE;
7772 }
7773
7774 if (type && !is_key_order ())
7775 {
7776 tree_node_vals (type);
7777 if (stub_decl)
7778 tree_node_vals (stub_decl);
7779 }
7780
7781 if (!is_key_order ())
7782 tree_node (get_constraints (decl));
7783
7784 if (streaming_p ())
7785 {
7786 /* Do not stray outside this section. */
7787 gcc_checking_assert (!dep || dep->section == dep_hash->section);
7788
7789 /* Write the entity index, so we can insert it as soon as we
7790 know this is new. */
7791 install_entity (decl, dep);
7792 }
7793
7794 if (inner
7795 && VAR_OR_FUNCTION_DECL_P (inner)
7796 && DECL_LANG_SPECIFIC (inner)
7797 && DECL_MODULE_ATTACHMENTS_P (inner)
7798 && !is_key_order ())
7799 {
7800 /* Stream the attached entities. */
7801 attachset *set = attached_table->get (DECL_UID (inner));
7802 unsigned num = set->num;
7803 if (streaming_p ())
7804 u (num);
7805 for (unsigned ix = 0; ix != num; ix++)
7806 {
7807 tree attached = set->values[ix];
7808 tree_node (attached);
7809 if (streaming_p ())
7810 dump (dumper::MERGE)
7811 && dump ("Written %d[%u] attached decl %N", tag, ix, attached);
7812 }
7813 }
7814
7815 bool is_typedef = (!type && inner
7816 && TREE_CODE (inner) == TYPE_DECL
7817 && DECL_ORIGINAL_TYPE (inner)
7818 && TYPE_NAME (TREE_TYPE (inner)) == inner);
7819 if (is_typedef)
7820 {
7821 /* A typedef type. */
7822 int type_tag = insert (TREE_TYPE (inner));
7823 if (streaming_p ())
7824 dump (dumper::TREE)
7825 && dump ("Cloned:%d typedef %C:%N", type_tag,
7826 TREE_CODE (TREE_TYPE (inner)), TREE_TYPE (inner));
7827 }
7828
7829 if (streaming_p () && DECL_MAYBE_IN_CHARGE_CDTOR_P (decl))
7830 {
7831 bool cloned_p
7832 = (DECL_CHAIN (decl) && DECL_CLONED_FUNCTION_P (DECL_CHAIN (decl)));
7833 bool needs_vtt_parm_p
7834 = (cloned_p && CLASSTYPE_VBASECLASSES (DECL_CONTEXT (decl)));
7835 bool omit_inherited_parms_p
7836 = (cloned_p && DECL_MAYBE_IN_CHARGE_CONSTRUCTOR_P (decl)
7837 && base_ctor_omit_inherited_parms (decl));
7838 unsigned flags = (int (cloned_p) << 0
7839 | int (needs_vtt_parm_p) << 1
7840 | int (omit_inherited_parms_p) << 2);
7841 u (flags);
7842 dump (dumper::TREE) && dump ("CDTOR %N is %scloned",
7843 decl, cloned_p ? "" : "not ");
7844 }
7845
7846 if (streaming_p ())
7847 dump (dumper::TREE) && dump ("Written decl:%d %C:%N", tag,
7848 TREE_CODE (decl), decl);
7849
7850 if (!inner || NAMESPACE_SCOPE_P (inner))
7851 gcc_checking_assert (!inner
7852 || !dep == (VAR_OR_FUNCTION_DECL_P (inner)
7853 && DECL_LOCAL_DECL_P (inner)));
7854 else if ((TREE_CODE (inner) == TYPE_DECL
7855 && TYPE_NAME (TREE_TYPE (inner)) == inner
7856 && !is_typedef)
7857 || TREE_CODE (inner) == FUNCTION_DECL)
7858 {
7859 bool write_defn = !dep && has_definition (decl);
7860 if (streaming_p ())
7861 u (write_defn);
7862 if (write_defn)
7863 write_definition (decl);
7864 }
7865 }
7866
7867 tree
7868 trees_in::decl_value ()
7869 {
7870 int tag = 0;
7871 bool is_mod = false;
7872 bool has_defn = false;
7873 unsigned mk_u = u ();
7874 if (mk_u >= MK_hwm || !merge_kind_name[mk_u])
7875 {
7876 set_overrun ();
7877 return NULL_TREE;
7878 }
7879
7880 unsigned saved_unused = unused;
7881 unused = 0;
7882
7883 merge_kind mk = merge_kind (mk_u);
7884
7885 tree decl = start ();
7886 if (decl)
7887 {
7888 if (mk != MK_unique)
7889 {
7890 if (!(mk & MK_template_mask) && !state->is_header ())
7891 /* See note in trees_out about where this bool is sequenced. */
7892 is_mod = b ();
7893
7894 has_defn = b ();
7895 }
7896
7897 if (!tree_node_bools (decl))
7898 decl = NULL_TREE;
7899 }
7900
7901 /* Insert into map. */
7902 tag = insert (decl);
7903 if (decl)
7904 dump (dumper::TREE)
7905 && dump ("Reading:%d %C", tag, TREE_CODE (decl));
7906
7907 tree inner = decl;
7908 int inner_tag = 0;
7909 if (decl && TREE_CODE (decl) == TEMPLATE_DECL)
7910 {
7911 int code = u ();
7912 if (!code)
7913 {
7914 inner = NULL_TREE;
7915 DECL_TEMPLATE_RESULT (decl) = error_mark_node;
7916 }
7917 else
7918 {
7919 inner = start (code);
7920 if (inner && tree_node_bools (inner))
7921 DECL_TEMPLATE_RESULT (decl) = inner;
7922 else
7923 decl = NULL_TREE;
7924
7925 inner_tag = insert (inner);
7926 if (decl)
7927 dump (dumper::TREE)
7928 && dump ("Reading:%d %C", inner_tag, TREE_CODE (inner));
7929 }
7930 }
7931
7932 tree type = NULL_TREE;
7933 int type_tag = 0;
7934 tree stub_decl = NULL_TREE;
7935 int stub_tag = 0;
7936 if (decl && inner && TREE_CODE (inner) == TYPE_DECL)
7937 {
7938 if (unsigned type_code = u ())
7939 {
7940 type = start (type_code);
7941 if (type && tree_node_bools (type))
7942 {
7943 TREE_TYPE (inner) = type;
7944 TYPE_NAME (type) = inner;
7945 }
7946 else
7947 decl = NULL_TREE;
7948
7949 type_tag = insert (type);
7950 if (decl)
7951 dump (dumper::TREE)
7952 && dump ("Reading type:%d %C", type_tag, TREE_CODE (type));
7953
7954 if (unsigned stub_code = u ())
7955 {
7956 stub_decl = start (stub_code);
7957 if (stub_decl && tree_node_bools (stub_decl))
7958 {
7959 TREE_TYPE (stub_decl) = type;
7960 TYPE_STUB_DECL (type) = stub_decl;
7961 }
7962 else
7963 decl = NULL_TREE;
7964
7965 stub_tag = insert (stub_decl);
7966 if (decl)
7967 dump (dumper::TREE)
7968 && dump ("Reading stub_decl:%d %C", stub_tag,
7969 TREE_CODE (stub_decl));
7970 }
7971 }
7972 }
7973
7974 if (!decl)
7975 {
7976 bail:
7977 if (inner_tag != 0)
7978 back_refs[~inner_tag] = NULL_TREE;
7979 if (type_tag != 0)
7980 back_refs[~type_tag] = NULL_TREE;
7981 if (stub_tag != 0)
7982 back_refs[~stub_tag] = NULL_TREE;
7983 if (tag != 0)
7984 back_refs[~tag] = NULL_TREE;
7985 set_overrun ();
7986 /* Bail. */
7987 unused = saved_unused;
7988 return NULL_TREE;
7989 }
7990
7991 /* Read the container, to ensure it's already been streamed in. */
7992 tree container = decl_container ();
7993 unsigned tpl_levels = 0;
7994
7995 /* Figure out if this decl is already known about. */
7996 int parm_tag = 0;
7997
7998 if (decl != inner)
7999 if (!tpl_header (decl, &tpl_levels))
8000 goto bail;
8001 if (inner && TREE_CODE (inner) == FUNCTION_DECL)
8002 parm_tag = fn_parms_init (inner);
8003
8004 tree existing = key_mergeable (tag, mk, decl, inner, type, container, is_mod);
8005 tree existing_inner = existing;
8006 if (existing)
8007 {
8008 if (existing == error_mark_node)
8009 goto bail;
8010
8011 if (TREE_CODE (STRIP_TEMPLATE (existing)) == TYPE_DECL)
8012 {
8013 tree etype = TREE_TYPE (existing);
8014 if (TYPE_LANG_SPECIFIC (etype)
8015 && COMPLETE_TYPE_P (etype)
8016 && !CLASSTYPE_MEMBER_VEC (etype))
8017 /* Give it a member vec, we're likely gonna be looking
8018 inside it. */
8019 set_class_bindings (etype, -1);
8020 }
8021
8022 /* Install the existing decl into the back ref array. */
8023 register_duplicate (decl, existing);
8024 back_refs[~tag] = existing;
8025 if (inner_tag != 0)
8026 {
8027 existing_inner = DECL_TEMPLATE_RESULT (existing);
8028 back_refs[~inner_tag] = existing_inner;
8029 }
8030
8031 if (type_tag != 0)
8032 {
8033 tree existing_type = TREE_TYPE (existing);
8034 back_refs[~type_tag] = existing_type;
8035 if (stub_tag != 0)
8036 back_refs[~stub_tag] = TYPE_STUB_DECL (existing_type);
8037 }
8038 }
8039
8040 if (parm_tag)
8041 fn_parms_fini (parm_tag, inner, existing_inner, has_defn);
8042
8043 if (!tree_node_vals (decl))
8044 goto bail;
8045
8046 if (inner_tag)
8047 {
8048 gcc_checking_assert (DECL_TEMPLATE_RESULT (decl) == inner);
8049
8050 if (!tree_node_vals (inner))
8051 goto bail;
8052
8053 if (!tpl_parms_fini (decl, tpl_levels))
8054 goto bail;
8055 }
8056 else if (!inner)
8057 {
8058 inner = tree_node ();
8059 DECL_TEMPLATE_RESULT (decl) = inner;
8060 TREE_TYPE (decl) = TREE_TYPE (inner);
8061 dump (dumper::TREE)
8062 && dump ("Read alias template %C:%N", TREE_CODE (inner), inner);
8063 inner = NULL_TREE;
8064 }
8065
8066 if (type && (!tree_node_vals (type)
8067 || (stub_decl && !tree_node_vals (stub_decl))))
8068 goto bail;
8069
8070 tree constraints = tree_node ();
8071
8072 dump (dumper::TREE) && dump ("Read:%d %C:%N", tag, TREE_CODE (decl), decl);
8073
8074 /* Regular typedefs will have a NULL TREE_TYPE at this point. */
8075 bool is_typedef = (!type && inner
8076 && TREE_CODE (inner) == TYPE_DECL
8077 && DECL_ORIGINAL_TYPE (inner)
8078 && !TREE_TYPE (inner));
8079 if (is_typedef)
8080 {
8081 /* Frob it to be ready for cloning. */
8082 TREE_TYPE (inner) = DECL_ORIGINAL_TYPE (inner);
8083 DECL_ORIGINAL_TYPE (inner) = NULL_TREE;
8084 }
8085
8086 existing = back_refs[~tag];
8087 bool installed = install_entity (existing);
8088 bool is_new = existing == decl;
8089
8090 if (inner
8091 && VAR_OR_FUNCTION_DECL_P (inner)
8092 && DECL_LANG_SPECIFIC (inner)
8093 && DECL_MODULE_ATTACHMENTS_P (inner))
8094 {
8095 /* Read and maybe install the attached entities. */
8096 attachset *set
8097 = attached_table->get (DECL_UID (STRIP_TEMPLATE (existing)));
8098 unsigned num = u ();
8099 if (!is_new == !set)
8100 set_overrun ();
8101 if (is_new)
8102 set = attached_table->create (DECL_UID (inner), num, NULL_TREE);
8103 for (unsigned ix = 0; !get_overrun () && ix != num; ix++)
8104 {
8105 tree attached = tree_node ();
8106 dump (dumper::MERGE)
8107 && dump ("Read %d[%u] %s attached decl %N", tag, ix,
8108 is_new ? "new" : "matched", attached);
8109 if (is_new)
8110 set->values[ix] = attached;
8111 else if (set->values[ix] != attached)
8112 set_overrun ();
8113 }
8114 }
8115
8116 if (is_new)
8117 {
8118 /* A newly discovered node. */
8119 if (TREE_CODE (decl) == FUNCTION_DECL && DECL_VIRTUAL_P (decl))
8120 /* Mark this identifier as naming a virtual function --
8121 lookup_overrides relies on this optimization. */
8122 IDENTIFIER_VIRTUAL_P (DECL_NAME (decl)) = true;
8123
8124 if (installed)
8125 {
8126 /* Mark the entity as imported and add it to the entity
8127 array and map. */
8128 retrofit_lang_decl (decl);
8129 DECL_MODULE_IMPORT_P (decl) = true;
8130 if (inner_tag)
8131 {
8132 retrofit_lang_decl (inner);
8133 DECL_MODULE_IMPORT_P (inner) = true;
8134 }
8135 }
8136
8137 if (constraints)
8138 set_constraints (decl, constraints);
8139
8140 if (TREE_CODE (decl) == INTEGER_CST && !TREE_OVERFLOW (decl))
8141 {
8142 decl = cache_integer_cst (decl, true);
8143 back_refs[~tag] = decl;
8144 }
8145
8146 if (is_typedef)
8147 set_underlying_type (inner);
8148
8149 if (inner_tag)
8150 /* Set the TEMPLATE_DECL's type. */
8151 TREE_TYPE (decl) = TREE_TYPE (inner);
8152
8153 /* The late insertion of an alias here or an implicit member
8154 (next block), is ok, because we ensured that all imports were
8155 loaded up before we started this cluster. Thus an insertion
8156 from some other import cannot have happened between the
8157 merged insertion above and these insertions down here. */
8158 if (mk == MK_alias_spec)
8159 {
8160 /* Insert into type table. */
8161 tree ti = DECL_TEMPLATE_INFO (inner);
8162 spec_entry elt =
8163 {TI_TEMPLATE (ti), TI_ARGS (ti), TREE_TYPE (inner)};
8164 tree texist = match_mergeable_specialization (false, &elt);
8165 if (texist)
8166 set_overrun ();
8167 }
8168
8169 if (DECL_ARTIFICIAL (decl)
8170 && TREE_CODE (decl) == FUNCTION_DECL
8171 && !DECL_TEMPLATE_INFO (decl)
8172 && DECL_CONTEXT (decl) && TYPE_P (DECL_CONTEXT (decl))
8173 && TYPE_SIZE (DECL_CONTEXT (decl))
8174 && !DECL_THUNK_P (decl))
8175 /* A new implicit member function, when the class is
8176 complete. This means the importee declared it, and
8177 we must now add it to the class. Note that implicit
8178 member fns of template instantiations do not themselves
8179 look like templates. */
8180 if (!install_implicit_member (inner))
8181 set_overrun ();
8182 }
8183 else
8184 {
8185 /* DECL is the to-be-discarded decl. Its internal pointers will
8186 be to the EXISTING's structure. Frob it to point to its
8187 own other structures, so loading its definition will alter
8188 it, and not the existing decl. */
8189 dump (dumper::MERGE) && dump ("Deduping %N", existing);
8190
8191 if (inner_tag)
8192 DECL_TEMPLATE_RESULT (decl) = inner;
8193
8194 if (type)
8195 {
8196 /* Point at the to-be-discarded type & decl. */
8197 TYPE_NAME (type) = inner;
8198 TREE_TYPE (inner) = type;
8199
8200 TYPE_STUB_DECL (type) = stub_decl ? stub_decl : inner;
8201 if (stub_decl)
8202 TREE_TYPE (stub_decl) = type;
8203 }
8204
8205 if (inner_tag)
8206 /* Set the TEMPLATE_DECL's type. */
8207 TREE_TYPE (decl) = TREE_TYPE (inner);
8208
8209 if (!is_matching_decl (existing, decl))
8210 unmatched_duplicate (existing);
8211
8212 /* And our result is the existing node. */
8213 decl = existing;
8214 }
8215
8216 if (is_typedef)
8217 {
8218 /* Insert the type into the array now. */
8219 tag = insert (TREE_TYPE (decl));
8220 dump (dumper::TREE)
8221 && dump ("Cloned:%d typedef %C:%N",
8222 tag, TREE_CODE (TREE_TYPE (decl)), TREE_TYPE (decl));
8223 }
8224
8225 unused = saved_unused;
8226
8227 if (DECL_MAYBE_IN_CHARGE_CDTOR_P (decl))
8228 {
8229 unsigned flags = u ();
8230
8231 if (is_new)
8232 {
8233 bool cloned_p = flags & 1;
8234 dump (dumper::TREE) && dump ("CDTOR %N is %scloned",
8235 decl, cloned_p ? "" : "not ");
8236 if (cloned_p)
8237 build_cdtor_clones (decl, flags & 2, flags & 4,
8238 /* Update the member vec, if there is
8239 one (we're in a different cluster
8240 to the class defn). */
8241 CLASSTYPE_MEMBER_VEC (DECL_CONTEXT (decl)));
8242 }
8243 }
8244
8245 if (inner
8246 && !NAMESPACE_SCOPE_P (inner)
8247 && ((TREE_CODE (inner) == TYPE_DECL
8248 && TYPE_NAME (TREE_TYPE (inner)) == inner
8249 && !is_typedef)
8250 || TREE_CODE (inner) == FUNCTION_DECL)
8251 && u ())
8252 read_definition (decl);
8253
8254 return decl;
8255 }
8256
8257 /* DECL is an unnameable member of CTX. Return a suitable identifying
8258 index. */
8259
8260 static unsigned
8261 get_field_ident (tree ctx, tree decl)
8262 {
8263 gcc_checking_assert (TREE_CODE (decl) == USING_DECL
8264 || !DECL_NAME (decl)
8265 || IDENTIFIER_ANON_P (DECL_NAME (decl)));
8266
8267 unsigned ix = 0;
8268 for (tree fields = TYPE_FIELDS (ctx);
8269 fields; fields = DECL_CHAIN (fields))
8270 {
8271 if (fields == decl)
8272 return ix;
8273
8274 if (DECL_CONTEXT (fields) == ctx
8275 && (TREE_CODE (fields) == USING_DECL
8276 || (TREE_CODE (fields) == FIELD_DECL
8277 && (!DECL_NAME (fields)
8278 || IDENTIFIER_ANON_P (DECL_NAME (fields))))))
8279 /* Count this field. */
8280 ix++;
8281 }
8282 gcc_unreachable ();
8283 }
8284
8285 static tree
8286 lookup_field_ident (tree ctx, unsigned ix)
8287 {
8288 for (tree fields = TYPE_FIELDS (ctx);
8289 fields; fields = DECL_CHAIN (fields))
8290 if (DECL_CONTEXT (fields) == ctx
8291 && (TREE_CODE (fields) == USING_DECL
8292 || (TREE_CODE (fields) == FIELD_DECL
8293 && (!DECL_NAME (fields)
8294 || IDENTIFIER_ANON_P (DECL_NAME (fields))))))
8295 if (!ix--)
8296 return fields;
8297
8298 return NULL_TREE;
8299 }
8300
8301 /* Reference DECL. REF indicates the walk kind we are performing.
8302 Return true if we should write this decl by value. */
8303
8304 bool
8305 trees_out::decl_node (tree decl, walk_kind ref)
8306 {
8307 gcc_checking_assert (DECL_P (decl) && !DECL_TEMPLATE_PARM_P (decl)
8308 && DECL_CONTEXT (decl));
8309
8310 if (ref == WK_value)
8311 {
8312 depset *dep = dep_hash->find_dependency (decl);
8313 decl_value (decl, dep);
8314 return false;
8315 }
8316
8317 switch (TREE_CODE (decl))
8318 {
8319 default:
8320 break;
8321
8322 case FUNCTION_DECL:
8323 gcc_checking_assert (!DECL_LOCAL_DECL_P (decl));
8324 break;
8325
8326 case RESULT_DECL:
8327 /* Unlike PARM_DECLs, RESULT_DECLs are only generated and
8328 referenced when we're inside the function itself. */
8329 return true;
8330
8331 case PARM_DECL:
8332 {
8333 if (streaming_p ())
8334 i (tt_parm);
8335 tree_node (DECL_CONTEXT (decl));
8336 if (streaming_p ())
8337 {
8338 /* That must have put this in the map. */
8339 walk_kind ref = ref_node (decl);
8340 if (ref != WK_none)
8341 // FIXME:OPTIMIZATION We can wander into bits of the
8342 // template this was instantiated from. For instance
8343 // deferred noexcept and default parms. Currently we'll
8344 // end up cloning those bits of tree. It would be nice
8345 // to reference those specific nodes. I think putting
8346 // those things in the map when we reference their
8347 // template by name. See the note in add_indirects.
8348 return true;
8349
8350 dump (dumper::TREE)
8351 && dump ("Wrote %s reference %N",
8352 TREE_CODE (decl) == PARM_DECL ? "parameter" : "result",
8353 decl);
8354 }
8355 }
8356 return false;
8357
8358 case IMPORTED_DECL:
8359 /* This describes a USING_DECL to the ME's debug machinery. It
8360 originates from the fortran FE, and has nothing to do with
8361 C++ modules. */
8362 return true;
8363
8364 case LABEL_DECL:
8365 return true;
8366
8367 case CONST_DECL:
8368 {
8369 /* If I end up cloning enum decls, implementing C++20 using
8370 E::v, this will need tweaking. */
8371 if (streaming_p ())
8372 i (tt_enum_decl);
8373 tree ctx = DECL_CONTEXT (decl);
8374 gcc_checking_assert (TREE_CODE (ctx) == ENUMERAL_TYPE);
8375 tree_node (ctx);
8376 tree_node (DECL_NAME (decl));
8377
8378 int tag = insert (decl);
8379 if (streaming_p ())
8380 dump (dumper::TREE)
8381 && dump ("Wrote enum decl:%d %C:%N", tag, TREE_CODE (decl), decl);
8382 return false;
8383 }
8384 break;
8385
8386 case USING_DECL:
8387 if (TREE_CODE (DECL_CONTEXT (decl)) == FUNCTION_DECL)
8388 break;
8389 /* FALLTHROUGH */
8390
8391 case FIELD_DECL:
8392 {
8393 if (streaming_p ())
8394 i (tt_data_member);
8395
8396 tree ctx = DECL_CONTEXT (decl);
8397 tree_node (ctx);
8398
8399 tree name = NULL_TREE;
8400
8401 if (TREE_CODE (decl) == USING_DECL)
8402 ;
8403 else
8404 {
8405 name = DECL_NAME (decl);
8406 if (name && IDENTIFIER_ANON_P (name))
8407 name = NULL_TREE;
8408 }
8409
8410 tree_node (name);
8411 if (!name && streaming_p ())
8412 {
8413 unsigned ix = get_field_ident (ctx, decl);
8414 u (ix);
8415 }
8416
8417 int tag = insert (decl);
8418 if (streaming_p ())
8419 dump (dumper::TREE)
8420 && dump ("Wrote member:%d %C:%N", tag, TREE_CODE (decl), decl);
8421 return false;
8422 }
8423 break;
8424
8425 case VAR_DECL:
8426 gcc_checking_assert (!DECL_LOCAL_DECL_P (decl));
8427 if (DECL_VTABLE_OR_VTT_P (decl))
8428 {
8429 /* VTT or VTABLE, they are all on the vtables list. */
8430 tree ctx = CP_DECL_CONTEXT (decl);
8431 tree vtable = CLASSTYPE_VTABLES (ctx);
8432 for (unsigned ix = 0; ; vtable = DECL_CHAIN (vtable), ix++)
8433 if (vtable == decl)
8434 {
8435 gcc_checking_assert (DECL_VIRTUAL_P (decl));
8436 if (streaming_p ())
8437 {
8438 u (tt_vtable);
8439 u (ix);
8440 dump (dumper::TREE)
8441 && dump ("Writing vtable %N[%u]", ctx, ix);
8442 }
8443 tree_node (ctx);
8444 return false;
8445 }
8446 gcc_unreachable ();
8447 }
8448
8449 if (DECL_TINFO_P (decl))
8450 {
8451 tinfo:
8452 /* A typeinfo, tt_tinfo_typedef or tt_tinfo_var. */
8453 bool is_var = TREE_CODE (decl) == VAR_DECL;
8454 tree type = TREE_TYPE (decl);
8455 unsigned ix = get_pseudo_tinfo_index (type);
8456 if (streaming_p ())
8457 {
8458 i (is_var ? tt_tinfo_var : tt_tinfo_typedef);
8459 u (ix);
8460 }
8461
8462 if (is_var)
8463 {
8464 /* We also need the type it is for and mangled name, so
8465 the reader doesn't need to complete the type (which
8466 would break section ordering). The type it is for is
8467 stashed on the name's TREE_TYPE. */
8468 tree name = DECL_NAME (decl);
8469 tree_node (name);
8470 type = TREE_TYPE (name);
8471 tree_node (type);
8472 }
8473
8474 int tag = insert (decl);
8475 if (streaming_p ())
8476 dump (dumper::TREE)
8477 && dump ("Wrote tinfo_%s:%d %u %N", is_var ? "var" : "type",
8478 tag, ix, type);
8479
8480 if (!is_var)
8481 {
8482 tag = insert (type);
8483 if (streaming_p ())
8484 dump (dumper::TREE)
8485 && dump ("Wrote tinfo_type:%d %u %N", tag, ix, type);
8486 }
8487 return false;
8488 }
8489 break;
8490
8491 case TYPE_DECL:
8492 if (DECL_TINFO_P (decl))
8493 goto tinfo;
8494 break;
8495 }
8496
8497 if (DECL_THUNK_P (decl))
8498 {
8499 /* Thunks are similar to binfos -- write the thunked-to decl and
8500 then thunk-specific key info. */
8501 if (streaming_p ())
8502 {
8503 i (tt_thunk);
8504 i (THUNK_FIXED_OFFSET (decl));
8505 }
8506
8507 tree target = decl;
8508 while (DECL_THUNK_P (target))
8509 target = THUNK_TARGET (target);
8510 tree_node (target);
8511 tree_node (THUNK_VIRTUAL_OFFSET (decl));
8512 int tag = insert (decl);
8513 if (streaming_p ())
8514 dump (dumper::TREE)
8515 && dump ("Wrote:%d thunk %N to %N", tag, DECL_NAME (decl), target);
8516 return false;
8517 }
8518
8519 if (DECL_CLONED_FUNCTION_P (decl))
8520 {
8521 tree target = get_clone_target (decl);
8522 if (streaming_p ())
8523 i (tt_clone_ref);
8524
8525 tree_node (target);
8526 tree_node (DECL_NAME (decl));
8527 int tag = insert (decl);
8528 if (streaming_p ())
8529 dump (dumper::TREE)
8530 && dump ("Wrote:%d clone %N of %N", tag, DECL_NAME (decl), target);
8531 return false;
8532 }
8533
8534 /* Everything left should be a thing that is in the entity table.
8535 Mostly things that can be defined outside of their (original
8536 declaration) context. */
8537 gcc_checking_assert (TREE_CODE (decl) == TEMPLATE_DECL
8538 || TREE_CODE (decl) == VAR_DECL
8539 || TREE_CODE (decl) == FUNCTION_DECL
8540 || TREE_CODE (decl) == TYPE_DECL
8541 || TREE_CODE (decl) == USING_DECL
8542 || TREE_CODE (decl) == CONCEPT_DECL
8543 || TREE_CODE (decl) == NAMESPACE_DECL);
8544
8545 int use_tpl = -1;
8546 tree ti = node_template_info (decl, use_tpl);
8547 tree tpl = NULL_TREE;
8548
8549 /* If this is the TEMPLATE_DECL_RESULT of a TEMPLATE_DECL, get the
8550 TEMPLATE_DECL. Note TI_TEMPLATE is not a TEMPLATE_DECL for
8551 (some) friends, so we need to check that. */
8552 // FIXME: Should local friend template specializations be by value?
8553 // They don't get idents so we'll never know they're imported, but I
8554 // think we can only reach them from the TU that defines the
8555 // befriending class?
8556 if (ti && TREE_CODE (TI_TEMPLATE (ti)) == TEMPLATE_DECL
8557 && DECL_TEMPLATE_RESULT (TI_TEMPLATE (ti)) == decl)
8558 {
8559 tpl = TI_TEMPLATE (ti);
8560 partial_template:
8561 if (streaming_p ())
8562 {
8563 i (tt_template);
8564 dump (dumper::TREE)
8565 && dump ("Writing implicit template %C:%N%S",
8566 TREE_CODE (tpl), tpl, tpl);
8567 }
8568 tree_node (tpl);
8569
8570 /* Streaming TPL caused us to visit DECL and maybe its type. */
8571 gcc_checking_assert (TREE_VISITED (decl));
8572 if (DECL_IMPLICIT_TYPEDEF_P (decl))
8573 gcc_checking_assert (TREE_VISITED (TREE_TYPE (decl)));
8574 return false;
8575 }
8576
8577 tree ctx = CP_DECL_CONTEXT (decl);
8578 depset *dep = NULL;
8579 if (streaming_p ())
8580 dep = dep_hash->find_dependency (decl);
8581 else if (TREE_CODE (ctx) != FUNCTION_DECL
8582 || TREE_CODE (decl) == TEMPLATE_DECL
8583 || (dep_hash->sneakoscope && DECL_IMPLICIT_TYPEDEF_P (decl))
8584 || (DECL_LANG_SPECIFIC (decl)
8585 && DECL_MODULE_IMPORT_P (decl)))
8586 dep = dep_hash->add_dependency (decl,
8587 TREE_CODE (decl) == NAMESPACE_DECL
8588 && !DECL_NAMESPACE_ALIAS (decl)
8589 ? depset::EK_NAMESPACE : depset::EK_DECL);
8590
8591 if (!dep)
8592 {
8593 /* Some internal entity of context. Do by value. */
8594 decl_value (decl, NULL);
8595 return false;
8596 }
8597
8598 if (dep->get_entity_kind () == depset::EK_REDIRECT)
8599 {
8600 /* The DECL_TEMPLATE_RESULT of a partial specialization.
8601 Write the partial specialization's template. */
8602 depset *redirect = dep->deps[0];
8603 gcc_checking_assert (redirect->get_entity_kind () == depset::EK_PARTIAL);
8604 tpl = redirect->get_entity ();
8605 goto partial_template;
8606 }
8607
8608 if (streaming_p ())
8609 {
8610 /* Locate the entity. */
8611 unsigned index = dep->cluster;
8612 unsigned import = 0;
8613
8614 if (dep->is_import ())
8615 import = dep->section;
8616 else if (CHECKING_P)
8617 /* It should be what we put there. */
8618 gcc_checking_assert (index == ~import_entity_index (decl));
8619
8620 #if CHECKING_P
8621 if (importedness)
8622 gcc_assert (!import == (importedness < 0));
8623 #endif
8624 i (tt_entity);
8625 u (import);
8626 u (index);
8627 }
8628
8629 int tag = insert (decl);
8630 if (streaming_p () && dump (dumper::TREE))
8631 {
8632 char const *kind = "import";
8633 module_state *from = (*modules)[0];
8634 if (dep->is_import ())
8635 /* Rediscover the unremapped index. */
8636 from = import_entity_module (import_entity_index (decl));
8637 else
8638 {
8639 tree o = get_originating_module_decl (decl);
8640 kind = (DECL_LANG_SPECIFIC (o) && DECL_MODULE_PURVIEW_P (o)
8641 ? "purview" : "GMF");
8642 }
8643 dump ("Wrote %s:%d %C:%N@%M", kind,
8644 tag, TREE_CODE (decl), decl, from);
8645 }
8646
8647 add_indirects (decl);
8648
8649 return false;
8650 }
8651
8652 void
8653 trees_out::type_node (tree type)
8654 {
8655 gcc_assert (TYPE_P (type));
8656
8657 tree root = (TYPE_NAME (type)
8658 ? TREE_TYPE (TYPE_NAME (type)) : TYPE_MAIN_VARIANT (type));
8659
8660 if (type != root)
8661 {
8662 if (streaming_p ())
8663 i (tt_variant_type);
8664 tree_node (root);
8665
8666 int flags = -1;
8667
8668 if (TREE_CODE (type) == FUNCTION_TYPE
8669 || TREE_CODE (type) == METHOD_TYPE)
8670 {
8671 int quals = type_memfn_quals (type);
8672 int rquals = type_memfn_rqual (type);
8673 tree raises = TYPE_RAISES_EXCEPTIONS (type);
8674 bool late = TYPE_HAS_LATE_RETURN_TYPE (type);
8675
8676 if (raises != TYPE_RAISES_EXCEPTIONS (root)
8677 || rquals != type_memfn_rqual (root)
8678 || quals != type_memfn_quals (root)
8679 || late != TYPE_HAS_LATE_RETURN_TYPE (root))
8680 flags = rquals | (int (late) << 2) | (quals << 3);
8681 }
8682 else
8683 {
8684 if (TYPE_USER_ALIGN (type))
8685 flags = exact_log2 (TYPE_ALIGN (type));
8686 }
8687
8688 if (streaming_p ())
8689 i (flags);
8690
8691 if (flags < 0)
8692 ;
8693 else if (TREE_CODE (type) == FUNCTION_TYPE
8694 || TREE_CODE (type) == METHOD_TYPE)
8695 {
8696 tree raises = TYPE_RAISES_EXCEPTIONS (type);
8697 if (raises == TYPE_RAISES_EXCEPTIONS (root))
8698 raises = error_mark_node;
8699 tree_node (raises);
8700 }
8701
8702 tree_node (TYPE_ATTRIBUTES (type));
8703
8704 if (streaming_p ())
8705 {
8706 /* Qualifiers. */
8707 int rquals = cp_type_quals (root);
8708 int quals = cp_type_quals (type);
8709 if (quals == rquals)
8710 quals = -1;
8711 i (quals);
8712 }
8713
8714 if (ref_node (type) != WK_none)
8715 {
8716 int tag = insert (type);
8717 if (streaming_p ())
8718 {
8719 i (0);
8720 dump (dumper::TREE)
8721 && dump ("Wrote:%d variant type %C", tag, TREE_CODE (type));
8722 }
8723 }
8724 return;
8725 }
8726
8727 if (tree name = TYPE_NAME (type))
8728 if ((TREE_CODE (name) == TYPE_DECL && DECL_ORIGINAL_TYPE (name))
8729 || DECL_TEMPLATE_PARM_P (name)
8730 || TREE_CODE (type) == RECORD_TYPE
8731 || TREE_CODE (type) == UNION_TYPE
8732 || TREE_CODE (type) == ENUMERAL_TYPE)
8733 {
8734 /* We can meet template parms that we didn't meet in the
8735 tpl_parms walk, because we're referring to a derived type
8736 that was previously constructed from equivalent template
8737 parms. */
8738 if (streaming_p ())
8739 {
8740 i (tt_typedef_type);
8741 dump (dumper::TREE)
8742 && dump ("Writing %stypedef %C:%N",
8743 DECL_IMPLICIT_TYPEDEF_P (name) ? "implicit " : "",
8744 TREE_CODE (name), name);
8745 }
8746 tree_node (name);
8747 if (streaming_p ())
8748 dump (dumper::TREE) && dump ("Wrote typedef %C:%N%S",
8749 TREE_CODE (name), name, name);
8750 gcc_checking_assert (TREE_VISITED (type));
8751 return;
8752 }
8753
8754 if (TYPE_PTRMEMFUNC_P (type))
8755 {
8756 /* This is a distinct type node, masquerading as a structure. */
8757 tree fn_type = TYPE_PTRMEMFUNC_FN_TYPE (type);
8758 if (streaming_p ())
8759 i (tt_ptrmem_type);
8760 tree_node (fn_type);
8761 int tag = insert (type);
8762 if (streaming_p ())
8763 dump (dumper::TREE) && dump ("Written:%d ptrmem type", tag);
8764 return;
8765 }
8766
8767 if (streaming_p ())
8768 {
8769 u (tt_derived_type);
8770 u (TREE_CODE (type));
8771 }
8772
8773 tree_node (TREE_TYPE (type));
8774 switch (TREE_CODE (type))
8775 {
8776 default:
8777 /* We should never meet a type here that is indescribable in
8778 terms of other types. */
8779 gcc_unreachable ();
8780
8781 case ARRAY_TYPE:
8782 tree_node (TYPE_DOMAIN (type));
8783 if (streaming_p ())
8784 /* Dependent arrays are constructed with TYPE_DEPENENT_P
8785 already set. */
8786 u (TYPE_DEPENDENT_P (type));
8787 break;
8788
8789 case COMPLEX_TYPE:
8790 /* No additional data. */
8791 break;
8792
8793 case BOOLEAN_TYPE:
8794 /* A non-standard boolean type. */
8795 if (streaming_p ())
8796 u (TYPE_PRECISION (type));
8797 break;
8798
8799 case INTEGER_TYPE:
8800 if (TREE_TYPE (type))
8801 {
8802 /* A range type (representing an array domain). */
8803 tree_node (TYPE_MIN_VALUE (type));
8804 tree_node (TYPE_MAX_VALUE (type));
8805 }
8806 else
8807 {
8808 /* A new integral type (representing a bitfield). */
8809 if (streaming_p ())
8810 {
8811 unsigned prec = TYPE_PRECISION (type);
8812 bool unsigned_p = TYPE_UNSIGNED (type);
8813
8814 u ((prec << 1) | unsigned_p);
8815 }
8816 }
8817 break;
8818
8819 case METHOD_TYPE:
8820 case FUNCTION_TYPE:
8821 {
8822 gcc_checking_assert (type_memfn_rqual (type) == REF_QUAL_NONE);
8823
8824 tree arg_types = TYPE_ARG_TYPES (type);
8825 if (TREE_CODE (type) == METHOD_TYPE)
8826 {
8827 tree_node (TREE_TYPE (TREE_VALUE (arg_types)));
8828 arg_types = TREE_CHAIN (arg_types);
8829 }
8830 tree_node (arg_types);
8831 }
8832 break;
8833
8834 case OFFSET_TYPE:
8835 tree_node (TYPE_OFFSET_BASETYPE (type));
8836 break;
8837
8838 case POINTER_TYPE:
8839 /* No additional data. */
8840 break;
8841
8842 case REFERENCE_TYPE:
8843 if (streaming_p ())
8844 u (TYPE_REF_IS_RVALUE (type));
8845 break;
8846
8847 case DECLTYPE_TYPE:
8848 case TYPEOF_TYPE:
8849 case UNDERLYING_TYPE:
8850 tree_node (TYPE_VALUES_RAW (type));
8851 if (TREE_CODE (type) == DECLTYPE_TYPE)
8852 /* We stash a whole bunch of things into decltype's
8853 flags. */
8854 if (streaming_p ())
8855 tree_node_bools (type);
8856 break;
8857
8858 case TYPE_ARGUMENT_PACK:
8859 /* No additional data. */
8860 break;
8861
8862 case TYPE_PACK_EXPANSION:
8863 if (streaming_p ())
8864 u (PACK_EXPANSION_LOCAL_P (type));
8865 tree_node (PACK_EXPANSION_PARAMETER_PACKS (type));
8866 break;
8867
8868 case TYPENAME_TYPE:
8869 {
8870 tree_node (TYPE_CONTEXT (type));
8871 tree_node (DECL_NAME (TYPE_NAME (type)));
8872 tree_node (TYPENAME_TYPE_FULLNAME (type));
8873 if (streaming_p ())
8874 {
8875 enum tag_types tag_type = none_type;
8876 if (TYPENAME_IS_ENUM_P (type))
8877 tag_type = enum_type;
8878 else if (TYPENAME_IS_CLASS_P (type))
8879 tag_type = class_type;
8880 u (int (tag_type));
8881 }
8882 }
8883 break;
8884
8885 case UNBOUND_CLASS_TEMPLATE:
8886 {
8887 tree decl = TYPE_NAME (type);
8888 tree_node (DECL_CONTEXT (decl));
8889 tree_node (DECL_NAME (decl));
8890 tree_node (DECL_TEMPLATE_PARMS (decl));
8891 }
8892 break;
8893
8894 case VECTOR_TYPE:
8895 if (streaming_p ())
8896 {
8897 poly_uint64 nunits = TYPE_VECTOR_SUBPARTS (type);
8898 /* to_constant asserts that only coeff[0] is of interest. */
8899 wu (static_cast<unsigned HOST_WIDE_INT> (nunits.to_constant ()));
8900 }
8901 break;
8902 }
8903
8904 /* We may have met the type during emitting the above. */
8905 if (ref_node (type) != WK_none)
8906 {
8907 int tag = insert (type);
8908 if (streaming_p ())
8909 {
8910 i (0);
8911 dump (dumper::TREE)
8912 && dump ("Wrote:%d derived type %C", tag, TREE_CODE (type));
8913 }
8914 }
8915
8916 return;
8917 }
8918
8919 /* T is (mostly*) a non-mergeable node that must be written by value.
8920 The mergeable case is a BINFO, which are as-if DECLSs. */
8921
8922 void
8923 trees_out::tree_value (tree t)
8924 {
8925 /* We should never be writing a type by value. tree_type should
8926 have streamed it, or we're going via its TYPE_DECL. */
8927 gcc_checking_assert (!TYPE_P (t));
8928
8929 if (DECL_P (t))
8930 /* No template, type, var or function, except anonymous
8931 non-context vars. */
8932 gcc_checking_assert ((TREE_CODE (t) != TEMPLATE_DECL
8933 && TREE_CODE (t) != TYPE_DECL
8934 && (TREE_CODE (t) != VAR_DECL
8935 || (!DECL_NAME (t) && !DECL_CONTEXT (t)))
8936 && TREE_CODE (t) != FUNCTION_DECL));
8937
8938 if (streaming_p ())
8939 {
8940 /* A new node -> tt_node. */
8941 tree_val_count++;
8942 i (tt_node);
8943 start (t);
8944 tree_node_bools (t);
8945 }
8946
8947 if (TREE_CODE (t) == TREE_BINFO)
8948 /* Binfos are decl-like and need merging information. */
8949 binfo_mergeable (t);
8950
8951 int tag = insert (t, WK_value);
8952 if (streaming_p ())
8953 dump (dumper::TREE)
8954 && dump ("Writing tree:%d %C:%N", tag, TREE_CODE (t), t);
8955
8956 tree_node_vals (t);
8957
8958 if (streaming_p ())
8959 dump (dumper::TREE) && dump ("Written tree:%d %C:%N", tag, TREE_CODE (t), t);
8960 }
8961
8962 tree
8963 trees_in::tree_value ()
8964 {
8965 tree t = start ();
8966 if (!t || !tree_node_bools (t))
8967 return NULL_TREE;
8968
8969 tree existing = t;
8970 if (TREE_CODE (t) == TREE_BINFO)
8971 {
8972 tree type;
8973 unsigned ix = binfo_mergeable (&type);
8974 if (TYPE_BINFO (type))
8975 {
8976 /* We already have a definition, this must be a duplicate. */
8977 dump (dumper::MERGE)
8978 && dump ("Deduping binfo %N[%u]", type, ix);
8979 existing = TYPE_BINFO (type);
8980 while (existing && ix)
8981 existing = TREE_CHAIN (existing);
8982 if (existing)
8983 register_duplicate (t, existing);
8984 else
8985 /* Error, mismatch -- diagnose in read_class_def's
8986 checking. */
8987 existing = t;
8988 }
8989 }
8990
8991 /* Insert into map. */
8992 int tag = insert (existing);
8993 dump (dumper::TREE)
8994 && dump ("Reading tree:%d %C", tag, TREE_CODE (t));
8995
8996 if (!tree_node_vals (t))
8997 {
8998 back_refs[~tag] = NULL_TREE;
8999 set_overrun ();
9000 /* Bail. */
9001 return NULL_TREE;
9002 }
9003
9004 dump (dumper::TREE) && dump ("Read tree:%d %C:%N", tag, TREE_CODE (t), t);
9005
9006 if (TREE_CODE (existing) == INTEGER_CST && !TREE_OVERFLOW (existing))
9007 {
9008 existing = cache_integer_cst (t, true);
9009 back_refs[~tag] = existing;
9010 }
9011
9012 return existing;
9013 }
9014
9015 /* Stream out tree node T. We automatically create local back
9016 references, which is essentially a single pass lisp
9017 self-referential structure pretty-printer. */
9018
9019 void
9020 trees_out::tree_node (tree t)
9021 {
9022 dump.indent ();
9023 walk_kind ref = ref_node (t);
9024 if (ref == WK_none)
9025 goto done;
9026
9027 if (ref != WK_normal)
9028 goto skip_normal;
9029
9030 if (TREE_CODE (t) == IDENTIFIER_NODE)
9031 {
9032 /* An identifier node -> tt_id, tt_conv_id, tt_anon_id, tt_lambda_id. */
9033 int code = tt_id;
9034 if (IDENTIFIER_ANON_P (t))
9035 code = IDENTIFIER_LAMBDA_P (t) ? tt_lambda_id : tt_anon_id;
9036 else if (IDENTIFIER_CONV_OP_P (t))
9037 code = tt_conv_id;
9038
9039 if (streaming_p ())
9040 i (code);
9041
9042 if (code == tt_conv_id)
9043 {
9044 tree type = TREE_TYPE (t);
9045 gcc_checking_assert (type || t == conv_op_identifier);
9046 tree_node (type);
9047 }
9048 else if (code == tt_id && streaming_p ())
9049 str (IDENTIFIER_POINTER (t), IDENTIFIER_LENGTH (t));
9050
9051 int tag = insert (t);
9052 if (streaming_p ())
9053 {
9054 /* We know the ordering of the 4 id tags. */
9055 static const char *const kinds[] =
9056 {"", "conv_op ", "anon ", "lambda "};
9057 dump (dumper::TREE)
9058 && dump ("Written:%d %sidentifier:%N", tag,
9059 kinds[code - tt_id],
9060 code == tt_conv_id ? TREE_TYPE (t) : t);
9061 }
9062 goto done;
9063 }
9064
9065 if (TREE_CODE (t) == TREE_BINFO)
9066 {
9067 /* A BINFO -> tt_binfo.
9068 We must do this by reference. We stream the binfo tree
9069 itself when streaming its owning RECORD_TYPE. That we got
9070 here means the dominating type is not in this SCC. */
9071 if (streaming_p ())
9072 i (tt_binfo);
9073 binfo_mergeable (t);
9074 gcc_checking_assert (!TREE_VISITED (t));
9075 int tag = insert (t);
9076 if (streaming_p ())
9077 dump (dumper::TREE) && dump ("Inserting binfo:%d %N", tag, t);
9078 goto done;
9079 }
9080
9081 if (TREE_CODE (t) == INTEGER_CST
9082 && !TREE_OVERFLOW (t)
9083 && TREE_CODE (TREE_TYPE (t)) == ENUMERAL_TYPE)
9084 {
9085 /* An integral constant of enumeral type. See if it matches one
9086 of the enumeration values. */
9087 for (tree values = TYPE_VALUES (TREE_TYPE (t));
9088 values; values = TREE_CHAIN (values))
9089 {
9090 tree decl = TREE_VALUE (values);
9091 if (tree_int_cst_equal (DECL_INITIAL (decl), t))
9092 {
9093 if (streaming_p ())
9094 u (tt_enum_value);
9095 tree_node (decl);
9096 dump (dumper::TREE) && dump ("Written enum value %N", decl);
9097 goto done;
9098 }
9099 }
9100 /* It didn't match. We'll write it a an explicit INTEGER_CST
9101 node. */
9102 }
9103
9104 if (TYPE_P (t))
9105 {
9106 type_node (t);
9107 goto done;
9108 }
9109
9110 if (DECL_P (t))
9111 {
9112 if (DECL_TEMPLATE_PARM_P (t))
9113 {
9114 tpl_parm_value (t);
9115 goto done;
9116 }
9117
9118 if (!DECL_CONTEXT (t))
9119 {
9120 /* There are a few cases of decls with no context. We'll write
9121 these by value, but first assert they are cases we expect. */
9122 gcc_checking_assert (ref == WK_normal);
9123 switch (TREE_CODE (t))
9124 {
9125 default: gcc_unreachable ();
9126
9127 case LABEL_DECL:
9128 /* CASE_LABEL_EXPRs contain uncontexted LABEL_DECLs. */
9129 gcc_checking_assert (!DECL_NAME (t));
9130 break;
9131
9132 case VAR_DECL:
9133 /* AGGR_INIT_EXPRs cons up anonymous uncontexted VAR_DECLs. */
9134 gcc_checking_assert (!DECL_NAME (t)
9135 && DECL_ARTIFICIAL (t));
9136 break;
9137
9138 case PARM_DECL:
9139 /* REQUIRES_EXPRs have a tree list of uncontexted
9140 PARM_DECLS. It'd be nice if they had a
9141 distinguishing flag to double check. */
9142 break;
9143 }
9144 goto by_value;
9145 }
9146 }
9147
9148 skip_normal:
9149 if (DECL_P (t) && !decl_node (t, ref))
9150 goto done;
9151
9152 /* Otherwise by value */
9153 by_value:
9154 tree_value (t);
9155
9156 done:
9157 /* And, breath out. */
9158 dump.outdent ();
9159 }
9160
9161 /* Stream in a tree node. */
9162
9163 tree
9164 trees_in::tree_node (bool is_use)
9165 {
9166 if (get_overrun ())
9167 return NULL_TREE;
9168
9169 dump.indent ();
9170 int tag = i ();
9171 tree res = NULL_TREE;
9172 switch (tag)
9173 {
9174 default:
9175 /* backref, pull it out of the map. */
9176 res = back_ref (tag);
9177 break;
9178
9179 case tt_null:
9180 /* NULL_TREE. */
9181 break;
9182
9183 case tt_fixed:
9184 /* A fixed ref, find it in the fixed_ref array. */
9185 {
9186 unsigned fix = u ();
9187 if (fix < (*fixed_trees).length ())
9188 {
9189 res = (*fixed_trees)[fix];
9190 dump (dumper::TREE) && dump ("Read fixed:%u %C:%N%S", fix,
9191 TREE_CODE (res), res, res);
9192 }
9193
9194 if (!res)
9195 set_overrun ();
9196 }
9197 break;
9198
9199 case tt_parm:
9200 {
9201 tree fn = tree_node ();
9202 if (fn && TREE_CODE (fn) == FUNCTION_DECL)
9203 res = tree_node ();
9204 if (res)
9205 dump (dumper::TREE)
9206 && dump ("Read %s reference %N",
9207 TREE_CODE (res) == PARM_DECL ? "parameter" : "result",
9208 res);
9209 }
9210 break;
9211
9212 case tt_node:
9213 /* A new node. Stream it in. */
9214 res = tree_value ();
9215 break;
9216
9217 case tt_decl:
9218 /* A new decl. Stream it in. */
9219 res = decl_value ();
9220 break;
9221
9222 case tt_tpl_parm:
9223 /* A template parameter. Stream it in. */
9224 res = tpl_parm_value ();
9225 break;
9226
9227 case tt_id:
9228 /* An identifier node. */
9229 {
9230 size_t l;
9231 const char *chars = str (&l);
9232 res = get_identifier_with_length (chars, l);
9233 int tag = insert (res);
9234 dump (dumper::TREE)
9235 && dump ("Read identifier:%d %N", tag, res);
9236 }
9237 break;
9238
9239 case tt_conv_id:
9240 /* A conversion operator. Get the type and recreate the
9241 identifier. */
9242 {
9243 tree type = tree_node ();
9244 if (!get_overrun ())
9245 {
9246 res = type ? make_conv_op_name (type) : conv_op_identifier;
9247 int tag = insert (res);
9248 dump (dumper::TREE)
9249 && dump ("Created conv_op:%d %S for %N", tag, res, type);
9250 }
9251 }
9252 break;
9253
9254 case tt_anon_id:
9255 case tt_lambda_id:
9256 /* An anonymous or lambda id. */
9257 {
9258 res = make_anon_name ();
9259 if (tag == tt_lambda_id)
9260 IDENTIFIER_LAMBDA_P (res) = true;
9261 int tag = insert (res);
9262 dump (dumper::TREE)
9263 && dump ("Read %s identifier:%d %N",
9264 IDENTIFIER_LAMBDA_P (res) ? "lambda" : "anon", tag, res);
9265 }
9266 break;
9267
9268 case tt_typedef_type:
9269 res = tree_node ();
9270 if (res)
9271 {
9272 dump (dumper::TREE)
9273 && dump ("Read %stypedef %C:%N",
9274 DECL_IMPLICIT_TYPEDEF_P (res) ? "implicit " : "",
9275 TREE_CODE (res), res);
9276 res = TREE_TYPE (res);
9277 }
9278 break;
9279
9280 case tt_derived_type:
9281 /* A type derived from some other type. */
9282 {
9283 enum tree_code code = tree_code (u ());
9284 res = tree_node ();
9285
9286 switch (code)
9287 {
9288 default:
9289 set_overrun ();
9290 break;
9291
9292 case ARRAY_TYPE:
9293 {
9294 tree domain = tree_node ();
9295 int dep = u ();
9296 if (!get_overrun ())
9297 res = build_cplus_array_type (res, domain, dep);
9298 }
9299 break;
9300
9301 case COMPLEX_TYPE:
9302 if (!get_overrun ())
9303 res = build_complex_type (res);
9304 break;
9305
9306 case BOOLEAN_TYPE:
9307 {
9308 unsigned precision = u ();
9309 if (!get_overrun ())
9310 res = build_nonstandard_boolean_type (precision);
9311 }
9312 break;
9313
9314 case INTEGER_TYPE:
9315 if (res)
9316 {
9317 /* A range type (representing an array domain). */
9318 tree min = tree_node ();
9319 tree max = tree_node ();
9320
9321 if (!get_overrun ())
9322 res = build_range_type (res, min, max);
9323 }
9324 else
9325 {
9326 /* A new integral type (representing a bitfield). */
9327 unsigned enc = u ();
9328 if (!get_overrun ())
9329 res = build_nonstandard_integer_type (enc >> 1, enc & 1);
9330 }
9331 break;
9332
9333 case FUNCTION_TYPE:
9334 case METHOD_TYPE:
9335 {
9336 tree klass = code == METHOD_TYPE ? tree_node () : NULL_TREE;
9337 tree args = tree_node ();
9338 if (!get_overrun ())
9339 {
9340 if (klass)
9341 res = build_method_type_directly (klass, res, args);
9342 else
9343 res = build_function_type (res, args);
9344 }
9345 }
9346 break;
9347
9348 case OFFSET_TYPE:
9349 {
9350 tree base = tree_node ();
9351 if (!get_overrun ())
9352 res = build_offset_type (base, res);
9353 }
9354 break;
9355
9356 case POINTER_TYPE:
9357 if (!get_overrun ())
9358 res = build_pointer_type (res);
9359 break;
9360
9361 case REFERENCE_TYPE:
9362 {
9363 bool rval = bool (u ());
9364 if (!get_overrun ())
9365 res = cp_build_reference_type (res, rval);
9366 }
9367 break;
9368
9369 case DECLTYPE_TYPE:
9370 case TYPEOF_TYPE:
9371 case UNDERLYING_TYPE:
9372 {
9373 tree expr = tree_node ();
9374 if (!get_overrun ())
9375 {
9376 res = cxx_make_type (code);
9377 TYPE_VALUES_RAW (res) = expr;
9378 if (code == DECLTYPE_TYPE)
9379 tree_node_bools (res);
9380 SET_TYPE_STRUCTURAL_EQUALITY (res);
9381 }
9382 }
9383 break;
9384
9385 case TYPE_ARGUMENT_PACK:
9386 if (!get_overrun ())
9387 {
9388 tree pack = cxx_make_type (TYPE_ARGUMENT_PACK);
9389 SET_ARGUMENT_PACK_ARGS (pack, res);
9390 res = pack;
9391 }
9392 break;
9393
9394 case TYPE_PACK_EXPANSION:
9395 {
9396 bool local = u ();
9397 tree param_packs = tree_node ();
9398 if (!get_overrun ())
9399 {
9400 tree expn = cxx_make_type (TYPE_PACK_EXPANSION);
9401 SET_TYPE_STRUCTURAL_EQUALITY (expn);
9402 SET_PACK_EXPANSION_PATTERN (expn, res);
9403 PACK_EXPANSION_PARAMETER_PACKS (expn) = param_packs;
9404 PACK_EXPANSION_LOCAL_P (expn) = local;
9405 res = expn;
9406 }
9407 }
9408 break;
9409
9410 case TYPENAME_TYPE:
9411 {
9412 tree ctx = tree_node ();
9413 tree name = tree_node ();
9414 tree fullname = tree_node ();
9415 enum tag_types tag_type = tag_types (u ());
9416
9417 if (!get_overrun ())
9418 res = build_typename_type (ctx, name, fullname, tag_type);
9419 }
9420 break;
9421
9422 case UNBOUND_CLASS_TEMPLATE:
9423 {
9424 tree ctx = tree_node ();
9425 tree name = tree_node ();
9426 tree parms = tree_node ();
9427
9428 if (!get_overrun ())
9429 res = make_unbound_class_template_raw (ctx, name, parms);
9430 }
9431 break;
9432
9433 case VECTOR_TYPE:
9434 {
9435 unsigned HOST_WIDE_INT nunits = wu ();
9436 if (!get_overrun ())
9437 res = build_vector_type (res, static_cast<poly_int64> (nunits));
9438 }
9439 break;
9440 }
9441
9442 int tag = i ();
9443 if (!tag)
9444 {
9445 tag = insert (res);
9446 if (res)
9447 dump (dumper::TREE)
9448 && dump ("Created:%d derived type %C", tag, code);
9449 }
9450 else
9451 res = back_ref (tag);
9452 }
9453 break;
9454
9455 case tt_variant_type:
9456 /* Variant of some type. */
9457 {
9458 res = tree_node ();
9459 int flags = i ();
9460 if (get_overrun ())
9461 ;
9462 else if (flags < 0)
9463 /* No change. */;
9464 else if (TREE_CODE (res) == FUNCTION_TYPE
9465 || TREE_CODE (res) == METHOD_TYPE)
9466 {
9467 cp_ref_qualifier rqual = cp_ref_qualifier (flags & 3);
9468 bool late = (flags >> 2) & 1;
9469 cp_cv_quals quals = cp_cv_quals (flags >> 3);
9470
9471 tree raises = tree_node ();
9472 if (raises == error_mark_node)
9473 raises = TYPE_RAISES_EXCEPTIONS (res);
9474
9475 res = build_cp_fntype_variant (res, rqual, raises, late);
9476 if (TREE_CODE (res) == FUNCTION_TYPE)
9477 res = apply_memfn_quals (res, quals, rqual);
9478 }
9479 else
9480 {
9481 res = build_aligned_type (res, 1u << flags);
9482 TYPE_USER_ALIGN (res) = true;
9483 }
9484
9485 if (tree attribs = tree_node ())
9486 res = cp_build_type_attribute_variant (res, attribs);
9487
9488 int quals = i ();
9489 if (quals >= 0 && !get_overrun ())
9490 res = cp_build_qualified_type (res, quals);
9491
9492 int tag = i ();
9493 if (!tag)
9494 {
9495 tag = insert (res);
9496 if (res)
9497 dump (dumper::TREE)
9498 && dump ("Created:%d variant type %C", tag, TREE_CODE (res));
9499 }
9500 else
9501 res = back_ref (tag);
9502 }
9503 break;
9504
9505 case tt_tinfo_var:
9506 case tt_tinfo_typedef:
9507 /* A tinfo var or typedef. */
9508 {
9509 bool is_var = tag == tt_tinfo_var;
9510 unsigned ix = u ();
9511 tree type = NULL_TREE;
9512
9513 if (is_var)
9514 {
9515 tree name = tree_node ();
9516 type = tree_node ();
9517
9518 if (!get_overrun ())
9519 res = get_tinfo_decl_direct (type, name, int (ix));
9520 }
9521 else
9522 {
9523 if (!get_overrun ())
9524 {
9525 type = get_pseudo_tinfo_type (ix);
9526 res = TYPE_NAME (type);
9527 }
9528 }
9529 if (res)
9530 {
9531 int tag = insert (res);
9532 dump (dumper::TREE)
9533 && dump ("Created tinfo_%s:%d %S:%u for %N",
9534 is_var ? "var" : "decl", tag, res, ix, type);
9535 if (!is_var)
9536 {
9537 tag = insert (type);
9538 dump (dumper::TREE)
9539 && dump ("Created tinfo_type:%d %u %N", tag, ix, type);
9540 }
9541 }
9542 }
9543 break;
9544
9545 case tt_ptrmem_type:
9546 /* A pointer to member function. */
9547 {
9548 tree type = tree_node ();
9549 if (type && TREE_CODE (type) == POINTER_TYPE
9550 && TREE_CODE (TREE_TYPE (type)) == METHOD_TYPE)
9551 {
9552 res = build_ptrmemfunc_type (type);
9553 int tag = insert (res);
9554 dump (dumper::TREE) && dump ("Created:%d ptrmem type", tag);
9555 }
9556 else
9557 set_overrun ();
9558 }
9559 break;
9560
9561 case tt_enum_value:
9562 /* An enum const value. */
9563 {
9564 if (tree decl = tree_node ())
9565 {
9566 dump (dumper::TREE) && dump ("Read enum value %N", decl);
9567 res = DECL_INITIAL (decl);
9568 }
9569
9570 if (!res)
9571 set_overrun ();
9572 }
9573 break;
9574
9575 case tt_enum_decl:
9576 /* An enum decl. */
9577 {
9578 tree ctx = tree_node ();
9579 tree name = tree_node ();
9580
9581 if (!get_overrun ()
9582 && TREE_CODE (ctx) == ENUMERAL_TYPE)
9583 res = find_enum_member (ctx, name);
9584
9585 if (!res)
9586 set_overrun ();
9587 else
9588 {
9589 int tag = insert (res);
9590 dump (dumper::TREE)
9591 && dump ("Read enum decl:%d %C:%N", tag, TREE_CODE (res), res);
9592 }
9593 }
9594 break;
9595
9596 case tt_data_member:
9597 /* A data member. */
9598 {
9599 tree ctx = tree_node ();
9600 tree name = tree_node ();
9601
9602 if (!get_overrun ()
9603 && RECORD_OR_UNION_TYPE_P (ctx))
9604 {
9605 if (name)
9606 res = lookup_class_binding (ctx, name);
9607 else
9608 res = lookup_field_ident (ctx, u ());
9609
9610 if (!res
9611 || TREE_CODE (res) != FIELD_DECL
9612 || DECL_CONTEXT (res) != ctx)
9613 res = NULL_TREE;
9614 }
9615
9616 if (!res)
9617 set_overrun ();
9618 else
9619 {
9620 int tag = insert (res);
9621 dump (dumper::TREE)
9622 && dump ("Read member:%d %C:%N", tag, TREE_CODE (res), res);
9623 }
9624 }
9625 break;
9626
9627 case tt_binfo:
9628 /* A BINFO. Walk the tree of the dominating type. */
9629 {
9630 tree type;
9631 unsigned ix = binfo_mergeable (&type);
9632 if (type)
9633 {
9634 res = TYPE_BINFO (type);
9635 for (; ix && res; res = TREE_CHAIN (res))
9636 ix--;
9637 if (!res)
9638 set_overrun ();
9639 }
9640
9641 if (get_overrun ())
9642 break;
9643
9644 /* Insert binfo into backreferences. */
9645 tag = insert (res);
9646 dump (dumper::TREE) && dump ("Read binfo:%d %N", tag, res);
9647 }
9648 break;
9649
9650 case tt_vtable:
9651 {
9652 unsigned ix = u ();
9653 tree ctx = tree_node ();
9654 dump (dumper::TREE) && dump ("Reading vtable %N[%u]", ctx, ix);
9655 if (TREE_CODE (ctx) == RECORD_TYPE && TYPE_LANG_SPECIFIC (ctx))
9656 for (res = CLASSTYPE_VTABLES (ctx); res; res = DECL_CHAIN (res))
9657 if (!ix--)
9658 break;
9659 if (!res)
9660 set_overrun ();
9661 }
9662 break;
9663
9664 case tt_thunk:
9665 {
9666 int fixed = i ();
9667 tree target = tree_node ();
9668 tree virt = tree_node ();
9669
9670 for (tree thunk = DECL_THUNKS (target);
9671 thunk; thunk = DECL_CHAIN (thunk))
9672 if (THUNK_FIXED_OFFSET (thunk) == fixed
9673 && !THUNK_VIRTUAL_OFFSET (thunk) == !virt
9674 && (!virt
9675 || tree_int_cst_equal (virt, THUNK_VIRTUAL_OFFSET (thunk))))
9676 {
9677 res = thunk;
9678 break;
9679 }
9680
9681 int tag = insert (res);
9682 if (res)
9683 dump (dumper::TREE)
9684 && dump ("Read:%d thunk %N to %N", tag, DECL_NAME (res), target);
9685 else
9686 set_overrun ();
9687 }
9688 break;
9689
9690 case tt_clone_ref:
9691 {
9692 tree target = tree_node ();
9693 tree name = tree_node ();
9694
9695 if (DECL_P (target) && DECL_MAYBE_IN_CHARGE_CDTOR_P (target))
9696 {
9697 tree clone;
9698 FOR_EVERY_CLONE (clone, target)
9699 if (DECL_NAME (clone) == name)
9700 {
9701 res = clone;
9702 break;
9703 }
9704 }
9705
9706 if (!res)
9707 set_overrun ();
9708 int tag = insert (res);
9709 if (res)
9710 dump (dumper::TREE)
9711 && dump ("Read:%d clone %N of %N", tag, DECL_NAME (res), target);
9712 else
9713 set_overrun ();
9714 }
9715 break;
9716
9717 case tt_entity:
9718 /* Index into the entity table. Perhaps not loaded yet! */
9719 {
9720 unsigned origin = state->slurp->remap_module (u ());
9721 unsigned ident = u ();
9722 module_state *from = (*modules)[origin];
9723
9724 if (!origin || ident >= from->entity_num)
9725 set_overrun ();
9726 if (!get_overrun ())
9727 {
9728 binding_slot *slot = &(*entity_ary)[from->entity_lwm + ident];
9729 if (slot->is_lazy ())
9730 if (!from->lazy_load (ident, slot))
9731 set_overrun ();
9732 res = *slot;
9733 }
9734
9735 if (res)
9736 {
9737 const char *kind = (origin != state->mod ? "Imported" : "Named");
9738 int tag = insert (res);
9739 dump (dumper::TREE)
9740 && dump ("%s:%d %C:%N@%M", kind, tag, TREE_CODE (res),
9741 res, (*modules)[origin]);
9742
9743 if (!add_indirects (res))
9744 {
9745 set_overrun ();
9746 res = NULL_TREE;
9747 }
9748 }
9749 }
9750 break;
9751
9752 case tt_template:
9753 /* A template. */
9754 if (tree tpl = tree_node ())
9755 {
9756 res = DECL_TEMPLATE_RESULT (tpl);
9757 dump (dumper::TREE)
9758 && dump ("Read template %C:%N", TREE_CODE (res), res);
9759 }
9760 break;
9761 }
9762
9763 if (is_use && !unused && res && DECL_P (res) && !TREE_USED (res))
9764 {
9765 /* Mark decl used as mark_used does -- we cannot call
9766 mark_used in the middle of streaming, we only need a subset
9767 of its functionality. */
9768 TREE_USED (res) = true;
9769
9770 /* And for structured bindings also the underlying decl. */
9771 if (DECL_DECOMPOSITION_P (res) && DECL_DECOMP_BASE (res))
9772 TREE_USED (DECL_DECOMP_BASE (res)) = true;
9773
9774 if (DECL_CLONED_FUNCTION_P (res))
9775 TREE_USED (DECL_CLONED_FUNCTION (res)) = true;
9776 }
9777
9778 dump.outdent ();
9779 return res;
9780 }
9781
9782 void
9783 trees_out::tpl_parms (tree parms, unsigned &tpl_levels)
9784 {
9785 if (!parms)
9786 return;
9787
9788 if (TREE_VISITED (parms))
9789 {
9790 ref_node (parms);
9791 return;
9792 }
9793
9794 tpl_parms (TREE_CHAIN (parms), tpl_levels);
9795
9796 tree vec = TREE_VALUE (parms);
9797 unsigned len = TREE_VEC_LENGTH (vec);
9798 /* Depth. */
9799 int tag = insert (parms);
9800 if (streaming_p ())
9801 {
9802 i (len + 1);
9803 dump (dumper::TREE)
9804 && dump ("Writing template parms:%d level:%N length:%d",
9805 tag, TREE_PURPOSE (parms), len);
9806 }
9807 tree_node (TREE_PURPOSE (parms));
9808
9809 for (unsigned ix = 0; ix != len; ix++)
9810 {
9811 tree parm = TREE_VEC_ELT (vec, ix);
9812 tree decl = TREE_VALUE (parm);
9813
9814 gcc_checking_assert (DECL_TEMPLATE_PARM_P (decl));
9815 if (CHECKING_P)
9816 switch (TREE_CODE (decl))
9817 {
9818 default: gcc_unreachable ();
9819
9820 case TEMPLATE_DECL:
9821 gcc_assert ((TREE_CODE (TREE_TYPE (decl)) == TEMPLATE_TEMPLATE_PARM)
9822 && (TREE_CODE (DECL_TEMPLATE_RESULT (decl)) == TYPE_DECL)
9823 && (TYPE_NAME (TREE_TYPE (decl)) == decl));
9824 break;
9825
9826 case TYPE_DECL:
9827 gcc_assert ((TREE_CODE (TREE_TYPE (decl)) == TEMPLATE_TYPE_PARM)
9828 && (TYPE_NAME (TREE_TYPE (decl)) == decl));
9829 break;
9830
9831 case PARM_DECL:
9832 gcc_assert ((TREE_CODE (DECL_INITIAL (decl)) == TEMPLATE_PARM_INDEX)
9833 && (TREE_CODE (TEMPLATE_PARM_DECL (DECL_INITIAL (decl)))
9834 == CONST_DECL)
9835 && (DECL_TEMPLATE_PARM_P
9836 (TEMPLATE_PARM_DECL (DECL_INITIAL (decl)))));
9837 break;
9838 }
9839
9840 tree_node (decl);
9841 tree_node (TEMPLATE_PARM_CONSTRAINTS (parm));
9842 }
9843
9844 tpl_levels++;
9845 }
9846
9847 tree
9848 trees_in::tpl_parms (unsigned &tpl_levels)
9849 {
9850 tree parms = NULL_TREE;
9851
9852 while (int len = i ())
9853 {
9854 if (len < 0)
9855 {
9856 parms = back_ref (len);
9857 continue;
9858 }
9859
9860 len -= 1;
9861 parms = tree_cons (NULL_TREE, NULL_TREE, parms);
9862 int tag = insert (parms);
9863 TREE_PURPOSE (parms) = tree_node ();
9864
9865 dump (dumper::TREE)
9866 && dump ("Reading template parms:%d level:%N length:%d",
9867 tag, TREE_PURPOSE (parms), len);
9868
9869 tree vec = make_tree_vec (len);
9870 for (int ix = 0; ix != len; ix++)
9871 {
9872 tree decl = tree_node ();
9873 if (!decl)
9874 return NULL_TREE;
9875
9876 tree parm = build_tree_list (NULL, decl);
9877 TEMPLATE_PARM_CONSTRAINTS (parm) = tree_node ();
9878
9879 TREE_VEC_ELT (vec, ix) = parm;
9880 }
9881
9882 TREE_VALUE (parms) = vec;
9883 tpl_levels++;
9884 }
9885
9886 return parms;
9887 }
9888
9889 void
9890 trees_out::tpl_parms_fini (tree tmpl, unsigned tpl_levels)
9891 {
9892 for (tree parms = DECL_TEMPLATE_PARMS (tmpl);
9893 tpl_levels--; parms = TREE_CHAIN (parms))
9894 {
9895 tree vec = TREE_VALUE (parms);
9896
9897 tree_node (TREE_TYPE (vec));
9898 tree dflt = error_mark_node;
9899 for (unsigned ix = TREE_VEC_LENGTH (vec); ix--;)
9900 {
9901 tree parm = TREE_VEC_ELT (vec, ix);
9902 if (dflt)
9903 {
9904 dflt = TREE_PURPOSE (parm);
9905 tree_node (dflt);
9906 }
9907
9908 if (streaming_p ())
9909 {
9910 tree decl = TREE_VALUE (parm);
9911 if (TREE_CODE (decl) == TEMPLATE_DECL)
9912 {
9913 tree ctx = DECL_CONTEXT (decl);
9914 tree inner = DECL_TEMPLATE_RESULT (decl);
9915 tree tpi = (TREE_CODE (inner) == TYPE_DECL
9916 ? TEMPLATE_TYPE_PARM_INDEX (TREE_TYPE (decl))
9917 : DECL_INITIAL (inner));
9918 bool original = (TEMPLATE_PARM_LEVEL (tpi)
9919 == TEMPLATE_PARM_ORIG_LEVEL (tpi));
9920 /* Original template template parms have a context
9921 of their owning template. Reduced ones do not. */
9922 gcc_checking_assert (original ? ctx == tmpl : !ctx);
9923 }
9924 }
9925 }
9926 }
9927 }
9928
9929 bool
9930 trees_in::tpl_parms_fini (tree tmpl, unsigned tpl_levels)
9931 {
9932 for (tree parms = DECL_TEMPLATE_PARMS (tmpl);
9933 tpl_levels--; parms = TREE_CHAIN (parms))
9934 {
9935 tree vec = TREE_VALUE (parms);
9936 tree dflt = error_mark_node;
9937
9938 TREE_TYPE (vec) = tree_node ();
9939 for (unsigned ix = TREE_VEC_LENGTH (vec); ix--;)
9940 {
9941 tree parm = TREE_VEC_ELT (vec, ix);
9942 if (dflt)
9943 {
9944 dflt = tree_node ();
9945 if (get_overrun ())
9946 return false;
9947 TREE_PURPOSE (parm) = dflt;
9948 }
9949
9950 tree decl = TREE_VALUE (parm);
9951 if (TREE_CODE (decl) == TEMPLATE_DECL)
9952 {
9953 tree inner = DECL_TEMPLATE_RESULT (decl);
9954 tree tpi = (TREE_CODE (inner) == TYPE_DECL
9955 ? TEMPLATE_TYPE_PARM_INDEX (TREE_TYPE (decl))
9956 : DECL_INITIAL (inner));
9957 bool original = (TEMPLATE_PARM_LEVEL (tpi)
9958 == TEMPLATE_PARM_ORIG_LEVEL (tpi));
9959 /* Original template template parms have a context
9960 of their owning template. Reduced ones do not. */
9961 if (original)
9962 DECL_CONTEXT (decl) = tmpl;
9963 }
9964 }
9965 }
9966 return true;
9967 }
9968
9969 /* PARMS is a LIST, one node per level.
9970 TREE_VALUE is a TREE_VEC of parm info for that level.
9971 each ELT is a TREE_LIST
9972 TREE_VALUE is PARM_DECL, TYPE_DECL or TEMPLATE_DECL
9973 TREE_PURPOSE is the default value. */
9974
9975 void
9976 trees_out::tpl_header (tree tpl, unsigned *tpl_levels)
9977 {
9978 tree parms = DECL_TEMPLATE_PARMS (tpl);
9979 tpl_parms (parms, *tpl_levels);
9980
9981 /* Mark end. */
9982 if (streaming_p ())
9983 u (0);
9984
9985 if (*tpl_levels)
9986 tree_node (TEMPLATE_PARMS_CONSTRAINTS (parms));
9987 }
9988
9989 bool
9990 trees_in::tpl_header (tree tpl, unsigned *tpl_levels)
9991 {
9992 tree parms = tpl_parms (*tpl_levels);
9993 if (!parms)
9994 return false;
9995
9996 DECL_TEMPLATE_PARMS (tpl) = parms;
9997
9998 if (*tpl_levels)
9999 TEMPLATE_PARMS_CONSTRAINTS (parms) = tree_node ();
10000
10001 return true;
10002 }
10003
10004 /* Stream skeleton parm nodes, with their flags, type & parm indices.
10005 All the parms will have consecutive tags. */
10006
10007 void
10008 trees_out::fn_parms_init (tree fn)
10009 {
10010 /* First init them. */
10011 int base_tag = ref_num - 1;
10012 int ix = 0;
10013 for (tree parm = DECL_ARGUMENTS (fn);
10014 parm; parm = DECL_CHAIN (parm), ix++)
10015 {
10016 if (streaming_p ())
10017 {
10018 start (parm);
10019 tree_node_bools (parm);
10020 }
10021 int tag = insert (parm);
10022 gcc_checking_assert (base_tag - ix == tag);
10023 }
10024 /* Mark the end. */
10025 if (streaming_p ())
10026 u (0);
10027
10028 /* Now stream their contents. */
10029 ix = 0;
10030 for (tree parm = DECL_ARGUMENTS (fn);
10031 parm; parm = DECL_CHAIN (parm), ix++)
10032 {
10033 if (streaming_p ())
10034 dump (dumper::TREE)
10035 && dump ("Writing parm:%d %u (%N) of %N",
10036 base_tag - ix, ix, parm, fn);
10037 tree_node_vals (parm);
10038 }
10039 }
10040
10041 /* Build skeleton parm nodes, read their flags, type & parm indices. */
10042
10043 int
10044 trees_in::fn_parms_init (tree fn)
10045 {
10046 int base_tag = ~(int)back_refs.length ();
10047
10048 tree *parm_ptr = &DECL_ARGUMENTS (fn);
10049 int ix = 0;
10050 for (; int code = u (); ix++)
10051 {
10052 tree parm = start (code);
10053 if (!tree_node_bools (parm))
10054 return 0;
10055
10056 int tag = insert (parm);
10057 gcc_checking_assert (base_tag - ix == tag);
10058 *parm_ptr = parm;
10059 parm_ptr = &DECL_CHAIN (parm);
10060 }
10061
10062 ix = 0;
10063 for (tree parm = DECL_ARGUMENTS (fn);
10064 parm; parm = DECL_CHAIN (parm), ix++)
10065 {
10066 dump (dumper::TREE)
10067 && dump ("Reading parm:%d %u (%N) of %N",
10068 base_tag - ix, ix, parm, fn);
10069 if (!tree_node_vals (parm))
10070 return 0;
10071 }
10072
10073 return base_tag;
10074 }
10075
10076 /* Read the remaining parm node data. Replace with existing (if
10077 non-null) in the map. */
10078
10079 void
10080 trees_in::fn_parms_fini (int tag, tree fn, tree existing, bool is_defn)
10081 {
10082 tree existing_parm = existing ? DECL_ARGUMENTS (existing) : NULL_TREE;
10083 tree parms = DECL_ARGUMENTS (fn);
10084 unsigned ix = 0;
10085 for (tree parm = parms; parm; parm = DECL_CHAIN (parm), ix++)
10086 {
10087 if (existing_parm)
10088 {
10089 if (is_defn && !DECL_SAVED_TREE (existing))
10090 {
10091 /* If we're about to become the definition, set the
10092 names of the parms from us. */
10093 DECL_NAME (existing_parm) = DECL_NAME (parm);
10094 DECL_SOURCE_LOCATION (existing_parm) = DECL_SOURCE_LOCATION (parm);
10095 }
10096
10097 back_refs[~tag] = existing_parm;
10098 existing_parm = DECL_CHAIN (existing_parm);
10099 }
10100 tag--;
10101 }
10102 }
10103
10104 /* DEP is the depset of some decl we're streaming by value. Determine
10105 the merging behaviour. */
10106
10107 merge_kind
10108 trees_out::get_merge_kind (tree decl, depset *dep)
10109 {
10110 if (!dep)
10111 {
10112 if (VAR_OR_FUNCTION_DECL_P (decl))
10113 {
10114 /* Any var or function with template info should have DEP. */
10115 gcc_checking_assert (!DECL_LANG_SPECIFIC (decl)
10116 || !DECL_TEMPLATE_INFO (decl));
10117 if (DECL_LOCAL_DECL_P (decl))
10118 return MK_unique;
10119 }
10120
10121 /* Either unique, or some member of a class that cannot have an
10122 out-of-class definition. For instance a FIELD_DECL. */
10123 tree ctx = CP_DECL_CONTEXT (decl);
10124 if (TREE_CODE (ctx) == FUNCTION_DECL)
10125 {
10126 /* USING_DECLs cannot have DECL_TEMPLATE_INFO -- this isn't
10127 permitting them to have one. */
10128 gcc_checking_assert (TREE_CODE (decl) == USING_DECL
10129 || !DECL_LANG_SPECIFIC (decl)
10130 || !DECL_TEMPLATE_INFO (decl));
10131
10132 return MK_unique;
10133 }
10134
10135 if (TREE_CODE (decl) == TEMPLATE_DECL
10136 && DECL_UNINSTANTIATED_TEMPLATE_FRIEND_P (decl))
10137 return MK_local_friend;
10138
10139 gcc_checking_assert (TYPE_P (ctx));
10140 if (TREE_CODE (decl) == USING_DECL)
10141 return MK_field;
10142
10143 if (TREE_CODE (decl) == FIELD_DECL)
10144 {
10145 if (DECL_NAME (decl))
10146 {
10147 /* Anonymous FIELD_DECLs have a NULL name. */
10148 gcc_checking_assert (!IDENTIFIER_ANON_P (DECL_NAME (decl)));
10149 return MK_named;
10150 }
10151
10152 if (!DECL_NAME (decl)
10153 && !RECORD_OR_UNION_TYPE_P (TREE_TYPE (decl))
10154 && !DECL_BIT_FIELD_REPRESENTATIVE (decl))
10155 {
10156 /* The underlying storage unit for a bitfield. We do not
10157 need to dedup it, because it's only reachable through
10158 the bitfields it represents. And those are deduped. */
10159 // FIXME: Is that assertion correct -- do we ever fish it
10160 // out and put it in an expr?
10161 gcc_checking_assert ((TREE_CODE (TREE_TYPE (decl)) == ARRAY_TYPE
10162 ? TREE_CODE (TREE_TYPE (TREE_TYPE (decl)))
10163 : TREE_CODE (TREE_TYPE (decl)))
10164 == INTEGER_TYPE);
10165 return MK_unique;
10166 }
10167
10168 return MK_field;
10169 }
10170
10171 if (TREE_CODE (decl) == CONST_DECL)
10172 return MK_named;
10173
10174 if (TREE_CODE (decl) == VAR_DECL
10175 && DECL_VTABLE_OR_VTT_P (decl))
10176 return MK_vtable;
10177
10178 if (DECL_THUNK_P (decl))
10179 /* Thunks are unique-enough, because they're only referenced
10180 from the vtable. And that's either new (so we want the
10181 thunks), or it's a duplicate (so it will be dropped). */
10182 return MK_unique;
10183
10184 /* There should be no other cases. */
10185 gcc_unreachable ();
10186 }
10187
10188 gcc_checking_assert (TREE_CODE (decl) != FIELD_DECL
10189 && TREE_CODE (decl) != USING_DECL
10190 && TREE_CODE (decl) != CONST_DECL);
10191
10192 if (is_key_order ())
10193 {
10194 /* When doing the mergeablilty graph, there's an indirection to
10195 the actual depset. */
10196 gcc_assert (dep->is_special ());
10197 dep = dep->deps[0];
10198 }
10199
10200 gcc_checking_assert (decl == dep->get_entity ());
10201
10202 merge_kind mk = MK_named;
10203 switch (dep->get_entity_kind ())
10204 {
10205 default:
10206 gcc_unreachable ();
10207
10208 case depset::EK_PARTIAL:
10209 mk = MK_partial;
10210 break;
10211
10212 case depset::EK_DECL:
10213 {
10214 tree ctx = CP_DECL_CONTEXT (decl);
10215
10216 switch (TREE_CODE (ctx))
10217 {
10218 default:
10219 gcc_unreachable ();
10220
10221 case FUNCTION_DECL:
10222 // FIXME: This can occur for (a) voldemorty TYPE_DECLS
10223 // (which are returned from a function), or (b)
10224 // block-scope class definitions in template functions.
10225 // These are as unique as the containing function. While
10226 // on read-back we can discover if the CTX was a
10227 // duplicate, we don't have a mechanism to get from the
10228 // existing CTX to the existing version of this decl.
10229 gcc_checking_assert
10230 (DECL_IMPLICIT_TYPEDEF_P (STRIP_TEMPLATE (decl)));
10231
10232 mk = MK_unique;
10233 break;
10234
10235 case RECORD_TYPE:
10236 case UNION_TYPE:
10237 if (DECL_NAME (decl) == as_base_identifier)
10238 mk = MK_as_base;
10239 else if (IDENTIFIER_ANON_P (DECL_NAME (decl)))
10240 mk = MK_field;
10241 break;
10242
10243 case NAMESPACE_DECL:
10244 if (DECL_IMPLICIT_TYPEDEF_P (STRIP_TEMPLATE (decl))
10245 && LAMBDA_TYPE_P (TREE_TYPE (decl)))
10246 if (tree scope
10247 = LAMBDA_EXPR_EXTRA_SCOPE (CLASSTYPE_LAMBDA_EXPR
10248 (TREE_TYPE (decl))))
10249 if (TREE_CODE (scope) == VAR_DECL
10250 && DECL_MODULE_ATTACHMENTS_P (scope))
10251 {
10252 mk = MK_attached;
10253 break;
10254 }
10255
10256 if (TREE_CODE (decl) == TEMPLATE_DECL
10257 && DECL_UNINSTANTIATED_TEMPLATE_FRIEND_P (decl))
10258 mk = MK_local_friend;
10259 else if (IDENTIFIER_ANON_P (DECL_NAME (decl)))
10260 {
10261 if (DECL_IMPLICIT_TYPEDEF_P (decl)
10262 && UNSCOPED_ENUM_P (TREE_TYPE (decl))
10263 && TYPE_VALUES (TREE_TYPE (decl)))
10264 /* Keyed by first enum value, and underlying type. */
10265 mk = MK_enum;
10266 else
10267 /* No way to merge it, it is an ODR land-mine. */
10268 mk = MK_unique;
10269 }
10270 }
10271 }
10272 break;
10273
10274 case depset::EK_SPECIALIZATION:
10275 {
10276 gcc_checking_assert (dep->is_special ());
10277 spec_entry *entry = reinterpret_cast <spec_entry *> (dep->deps[0]);
10278
10279 if (TREE_CODE (DECL_CONTEXT (decl)) == FUNCTION_DECL)
10280 /* An block-scope classes of templates are themselves
10281 templates. */
10282 gcc_checking_assert (DECL_IMPLICIT_TYPEDEF_P (decl));
10283
10284 if (dep->is_friend_spec ())
10285 mk = MK_friend_spec;
10286 else if (dep->is_type_spec ())
10287 mk = MK_type_spec;
10288 else if (dep->is_alias ())
10289 mk = MK_alias_spec;
10290 else
10291 mk = MK_decl_spec;
10292
10293 if (TREE_CODE (decl) == TEMPLATE_DECL)
10294 {
10295 tree res = DECL_TEMPLATE_RESULT (decl);
10296 if (!(mk & MK_tmpl_decl_mask))
10297 res = TREE_TYPE (res);
10298
10299 if (res == entry->spec)
10300 /* We check we can get back to the template during
10301 streaming. */
10302 mk = merge_kind (mk | MK_tmpl_tmpl_mask);
10303 }
10304 }
10305 break;
10306 }
10307
10308 return mk;
10309 }
10310
10311
10312 /* The container of DECL -- not necessarily its context! */
10313
10314 tree
10315 trees_out::decl_container (tree decl)
10316 {
10317 int use_tpl;
10318 tree tpl = NULL_TREE;
10319 if (tree template_info = node_template_info (decl, use_tpl))
10320 tpl = TI_TEMPLATE (template_info);
10321 if (tpl == decl)
10322 tpl = nullptr;
10323
10324 /* Stream the template we're instantiated from. */
10325 tree_node (tpl);
10326
10327 tree container = NULL_TREE;
10328 if (TREE_CODE (decl) == TEMPLATE_DECL
10329 && DECL_UNINSTANTIATED_TEMPLATE_FRIEND_P (decl))
10330 container = DECL_CHAIN (decl);
10331 else
10332 container = CP_DECL_CONTEXT (decl);
10333
10334 if (TYPE_P (container))
10335 container = TYPE_NAME (container);
10336
10337 tree_node (container);
10338
10339 return container;
10340 }
10341
10342 tree
10343 trees_in::decl_container ()
10344 {
10345 /* The maybe-template. */
10346 (void)tree_node ();
10347
10348 tree container = tree_node ();
10349
10350 return container;
10351 }
10352
10353 /* Write out key information about a mergeable DEP. Does not write
10354 the contents of DEP itself. The context has already been
10355 written. The container has already been streamed. */
10356
10357 void
10358 trees_out::key_mergeable (int tag, merge_kind mk, tree decl, tree inner,
10359 tree container, depset *dep)
10360 {
10361 if (dep && is_key_order ())
10362 {
10363 gcc_checking_assert (dep->is_special ());
10364 dep = dep->deps[0];
10365 }
10366
10367 if (streaming_p ())
10368 dump (dumper::MERGE)
10369 && dump ("Writing:%d's %s merge key (%s) %C:%N", tag, merge_kind_name[mk],
10370 dep ? dep->entity_kind_name () : "contained",
10371 TREE_CODE (decl), decl);
10372
10373 /* Now write the locating information. */
10374 if (mk & MK_template_mask)
10375 {
10376 /* Specializations are located via their originating template,
10377 and the set of template args they specialize. */
10378 gcc_checking_assert (dep && dep->is_special ());
10379 spec_entry *entry = reinterpret_cast <spec_entry *> (dep->deps[0]);
10380
10381 tree_node (entry->tmpl);
10382 tree_node (entry->args);
10383 if (streaming_p ())
10384 u (get_mergeable_specialization_flags (entry->tmpl, decl));
10385 if (mk & MK_tmpl_decl_mask)
10386 if (flag_concepts && TREE_CODE (inner) == VAR_DECL)
10387 {
10388 /* Variable template partial specializations might need
10389 constraints (see spec_hasher::equal). It's simpler to
10390 write NULL when we don't need them. */
10391 tree constraints = NULL_TREE;
10392
10393 if (uses_template_parms (entry->args))
10394 constraints = get_constraints (inner);
10395 tree_node (constraints);
10396 }
10397
10398 if (CHECKING_P)
10399 {
10400 /* Make sure we can locate the decl. */
10401 tree existing = match_mergeable_specialization
10402 (bool (mk & MK_tmpl_decl_mask), entry, false);
10403
10404 gcc_assert (existing);
10405 if (mk & MK_tmpl_decl_mask)
10406 {
10407 if (mk & MK_tmpl_alias_mask)
10408 /* It should be in both tables. */
10409 gcc_assert (match_mergeable_specialization (false, entry, false)
10410 == TREE_TYPE (existing));
10411 else if (mk & MK_tmpl_tmpl_mask)
10412 if (tree ti = DECL_TEMPLATE_INFO (existing))
10413 existing = TI_TEMPLATE (ti);
10414 }
10415 else
10416 {
10417 if (!(mk & MK_tmpl_tmpl_mask))
10418 existing = TYPE_NAME (existing);
10419 else if (tree ti = CLASSTYPE_TEMPLATE_INFO (existing))
10420 existing = TI_TEMPLATE (ti);
10421 }
10422
10423 /* The walkabout should have found ourselves. */
10424 gcc_assert (existing == decl);
10425 }
10426 }
10427 else if (mk != MK_unique)
10428 {
10429 merge_key key;
10430 tree name = DECL_NAME (decl);
10431
10432 switch (mk)
10433 {
10434 default:
10435 gcc_unreachable ();
10436
10437 case MK_named:
10438 case MK_friend_spec:
10439 if (IDENTIFIER_CONV_OP_P (name))
10440 name = conv_op_identifier;
10441
10442 if (inner && TREE_CODE (inner) == FUNCTION_DECL)
10443 {
10444 /* Functions are distinguished by parameter types. */
10445 tree fn_type = TREE_TYPE (inner);
10446
10447 key.ref_q = type_memfn_rqual (fn_type);
10448 key.args = TYPE_ARG_TYPES (fn_type);
10449
10450 if (tree reqs = get_constraints (inner))
10451 {
10452 if (cxx_dialect < cxx20)
10453 reqs = CI_ASSOCIATED_CONSTRAINTS (reqs);
10454 else
10455 reqs = CI_DECLARATOR_REQS (reqs);
10456 key.constraints = reqs;
10457 }
10458
10459 if (IDENTIFIER_CONV_OP_P (name)
10460 || (decl != inner
10461 && !(name == fun_identifier
10462 /* In case the user names something _FUN */
10463 && LAMBDA_TYPE_P (DECL_CONTEXT (inner)))))
10464 /* And a function template, or conversion operator needs
10465 the return type. Except for the _FUN thunk of a
10466 generic lambda, which has a recursive decl_type'd
10467 return type. */
10468 // FIXME: What if the return type is a voldemort?
10469 key.ret = fndecl_declared_return_type (inner);
10470 }
10471
10472 if (mk == MK_friend_spec)
10473 {
10474 gcc_checking_assert (dep && dep->is_special ());
10475 spec_entry *entry = reinterpret_cast <spec_entry *> (dep->deps[0]);
10476
10477 tree_node (entry->tmpl);
10478 tree_node (entry->args);
10479 if (streaming_p ())
10480 u (get_mergeable_specialization_flags (entry->tmpl, decl));
10481 }
10482 break;
10483
10484 case MK_field:
10485 {
10486 unsigned ix = 0;
10487 if (TREE_CODE (inner) != FIELD_DECL)
10488 name = NULL_TREE;
10489 else
10490 gcc_checking_assert (!name || !IDENTIFIER_ANON_P (name));
10491
10492 for (tree field = TYPE_FIELDS (TREE_TYPE (container));
10493 ; field = DECL_CHAIN (field))
10494 {
10495 tree finner = STRIP_TEMPLATE (field);
10496 if (TREE_CODE (finner) == TREE_CODE (inner))
10497 {
10498 if (finner == inner)
10499 break;
10500 ix++;
10501 }
10502 }
10503 key.index = ix;
10504 }
10505 break;
10506
10507 case MK_vtable:
10508 {
10509 tree vtable = CLASSTYPE_VTABLES (TREE_TYPE (container));
10510 for (unsigned ix = 0; ; vtable = DECL_CHAIN (vtable), ix++)
10511 if (vtable == decl)
10512 {
10513 key.index = ix;
10514 break;
10515 }
10516 name = NULL_TREE;
10517 }
10518 break;
10519
10520 case MK_as_base:
10521 gcc_checking_assert
10522 (decl == TYPE_NAME (CLASSTYPE_AS_BASE (TREE_TYPE (container))));
10523 break;
10524
10525 case MK_local_friend:
10526 {
10527 /* Find by index on the class's DECL_LIST */
10528 unsigned ix = 0;
10529 for (tree decls = CLASSTYPE_DECL_LIST (TREE_CHAIN (decl));
10530 decls; decls = TREE_CHAIN (decls))
10531 if (!TREE_PURPOSE (decls))
10532 {
10533 tree frnd = friend_from_decl_list (TREE_VALUE (decls));
10534 if (frnd == decl)
10535 break;
10536 ix++;
10537 }
10538 key.index = ix;
10539 name = NULL_TREE;
10540 }
10541 break;
10542
10543 case MK_enum:
10544 {
10545 /* Anonymous enums are located by their first identifier,
10546 and underlying type. */
10547 tree type = TREE_TYPE (decl);
10548
10549 gcc_checking_assert (UNSCOPED_ENUM_P (type));
10550 /* Using the type name drops the bit precision we might
10551 have been using on the enum. */
10552 key.ret = TYPE_NAME (ENUM_UNDERLYING_TYPE (type));
10553 if (tree values = TYPE_VALUES (type))
10554 name = DECL_NAME (TREE_VALUE (values));
10555 }
10556 break;
10557
10558 case MK_attached:
10559 {
10560 gcc_checking_assert (LAMBDA_TYPE_P (TREE_TYPE (inner)));
10561 tree scope = LAMBDA_EXPR_EXTRA_SCOPE (CLASSTYPE_LAMBDA_EXPR
10562 (TREE_TYPE (inner)));
10563 gcc_checking_assert (TREE_CODE (scope) == VAR_DECL);
10564 attachset *root = attached_table->get (DECL_UID (scope));
10565 unsigned ix = root->num;
10566 /* If we don't find it, we'll write a really big number
10567 that the reader will ignore. */
10568 while (ix--)
10569 if (root->values[ix] == inner)
10570 break;
10571
10572 /* Use the attached-to decl as the 'name'. */
10573 name = scope;
10574 key.index = ix;
10575 }
10576 break;
10577
10578 case MK_partial:
10579 {
10580 key.constraints = get_constraints (inner);
10581 key.ret = CLASSTYPE_TI_TEMPLATE (TREE_TYPE (inner));
10582 key.args = CLASSTYPE_TI_ARGS (TREE_TYPE (inner));
10583 }
10584 break;
10585 }
10586
10587 tree_node (name);
10588 if (streaming_p ())
10589 {
10590 unsigned code = (key.ref_q << 0) | (key.index << 2);
10591 u (code);
10592 }
10593
10594 if (mk == MK_enum)
10595 tree_node (key.ret);
10596 else if (mk == MK_partial
10597 || (mk == MK_named && inner
10598 && TREE_CODE (inner) == FUNCTION_DECL))
10599 {
10600 tree_node (key.ret);
10601 tree arg = key.args;
10602 if (mk == MK_named)
10603 while (arg && arg != void_list_node)
10604 {
10605 tree_node (TREE_VALUE (arg));
10606 arg = TREE_CHAIN (arg);
10607 }
10608 tree_node (arg);
10609 tree_node (key.constraints);
10610 }
10611 }
10612 }
10613
10614 /* DECL is a new declaration that may be duplicated in OVL. Use RET &
10615 ARGS to find its clone, or NULL. If DECL's DECL_NAME is NULL, this
10616 has been found by a proxy. It will be an enum type located by it's
10617 first member.
10618
10619 We're conservative with matches, so ambiguous decls will be
10620 registered as different, then lead to a lookup error if the two
10621 modules are both visible. Perhaps we want to do something similar
10622 to duplicate decls to get ODR errors on loading? We already have
10623 some special casing for namespaces. */
10624
10625 static tree
10626 check_mergeable_decl (merge_kind mk, tree decl, tree ovl, merge_key const &key)
10627 {
10628 tree found = NULL_TREE;
10629 for (ovl_iterator iter (ovl); !found && iter; ++iter)
10630 {
10631 tree match = *iter;
10632
10633 tree d_inner = decl;
10634 tree m_inner = match;
10635
10636 again:
10637 if (TREE_CODE (d_inner) != TREE_CODE (m_inner))
10638 {
10639 if (TREE_CODE (match) == NAMESPACE_DECL
10640 && !DECL_NAMESPACE_ALIAS (match))
10641 /* Namespaces are never overloaded. */
10642 found = match;
10643
10644 continue;
10645 }
10646
10647 switch (TREE_CODE (d_inner))
10648 {
10649 case TEMPLATE_DECL:
10650 if (template_heads_equivalent_p (d_inner, m_inner))
10651 {
10652 d_inner = DECL_TEMPLATE_RESULT (d_inner);
10653 m_inner = DECL_TEMPLATE_RESULT (m_inner);
10654 if (d_inner == error_mark_node
10655 && TYPE_DECL_ALIAS_P (m_inner))
10656 {
10657 found = match;
10658 break;
10659 }
10660 goto again;
10661 }
10662 break;
10663
10664 case FUNCTION_DECL:
10665 map_context_from = d_inner;
10666 map_context_to = m_inner;
10667 if (tree m_type = TREE_TYPE (m_inner))
10668 if ((!key.ret
10669 || same_type_p (key.ret, fndecl_declared_return_type (m_inner)))
10670 && type_memfn_rqual (m_type) == key.ref_q
10671 && compparms (key.args, TYPE_ARG_TYPES (m_type))
10672 /* Reject if old is a "C" builtin and new is not "C".
10673 Matches decls_match behaviour. */
10674 && (!DECL_IS_UNDECLARED_BUILTIN (m_inner)
10675 || !DECL_EXTERN_C_P (m_inner)
10676 || DECL_EXTERN_C_P (d_inner)))
10677 {
10678 tree m_reqs = get_constraints (m_inner);
10679 if (m_reqs)
10680 {
10681 if (cxx_dialect < cxx20)
10682 m_reqs = CI_ASSOCIATED_CONSTRAINTS (m_reqs);
10683 else
10684 m_reqs = CI_DECLARATOR_REQS (m_reqs);
10685 }
10686
10687 if (cp_tree_equal (key.constraints, m_reqs))
10688 found = match;
10689 }
10690 map_context_from = map_context_to = NULL_TREE;
10691 break;
10692
10693 case TYPE_DECL:
10694 if (DECL_IMPLICIT_TYPEDEF_P (d_inner)
10695 == DECL_IMPLICIT_TYPEDEF_P (m_inner))
10696 {
10697 if (!IDENTIFIER_ANON_P (DECL_NAME (m_inner)))
10698 return match;
10699 else if (mk == MK_enum
10700 && (TYPE_NAME (ENUM_UNDERLYING_TYPE (TREE_TYPE (m_inner)))
10701 == key.ret))
10702 found = match;
10703 }
10704 break;
10705
10706 default:
10707 found = match;
10708 break;
10709 }
10710 }
10711
10712 return found;
10713 }
10714
10715 /* DECL, INNER & TYPE are a skeleton set of nodes for a decl. Only
10716 the bools have been filled in. Read its merging key and merge it.
10717 Returns the existing decl if there is one. */
10718
10719 tree
10720 trees_in::key_mergeable (int tag, merge_kind mk, tree decl, tree inner,
10721 tree type, tree container, bool is_mod)
10722 {
10723 const char *kind = "new";
10724 tree existing = NULL_TREE;
10725
10726 if (mk & MK_template_mask)
10727 {
10728 spec_entry spec;
10729 spec.tmpl = tree_node ();
10730 spec.args = tree_node ();
10731 unsigned flags = u ();
10732
10733 DECL_NAME (decl) = DECL_NAME (spec.tmpl);
10734 DECL_CONTEXT (decl) = DECL_CONTEXT (spec.tmpl);
10735 DECL_NAME (inner) = DECL_NAME (decl);
10736 DECL_CONTEXT (inner) = DECL_CONTEXT (decl);
10737
10738 spec.spec = decl;
10739 if (mk & MK_tmpl_tmpl_mask)
10740 {
10741 if (inner == decl)
10742 return error_mark_node;
10743 spec.spec = inner;
10744 }
10745 tree constr = NULL_TREE;
10746 bool is_decl = mk & MK_tmpl_decl_mask;
10747 if (is_decl)
10748 {
10749 if (flag_concepts && TREE_CODE (inner) == VAR_DECL)
10750 {
10751 constr = tree_node ();
10752 if (constr)
10753 set_constraints (inner, constr);
10754 }
10755 }
10756 else
10757 {
10758 if (mk == MK_type_spec && inner != decl)
10759 return error_mark_node;
10760 spec.spec = type;
10761 }
10762 existing = match_mergeable_specialization (is_decl, &spec);
10763 if (constr)
10764 /* We'll add these back later, if this is the new decl. */
10765 remove_constraints (inner);
10766
10767 if (!existing)
10768 add_mergeable_specialization (spec.tmpl, spec.args, decl, flags);
10769 else if (mk & MK_tmpl_decl_mask)
10770 {
10771 /* A declaration specialization. */
10772 if (mk & MK_tmpl_tmpl_mask)
10773 if (tree ti = DECL_TEMPLATE_INFO (existing))
10774 {
10775 tree tmpl = TI_TEMPLATE (ti);
10776 if (DECL_TEMPLATE_RESULT (tmpl) == existing)
10777 existing = tmpl;
10778 }
10779 }
10780 else
10781 {
10782 /* A type specialization. */
10783 if (!(mk & MK_tmpl_tmpl_mask))
10784 existing = TYPE_NAME (existing);
10785 else if (tree ti = CLASSTYPE_TEMPLATE_INFO (existing))
10786 {
10787 tree tmpl = TI_TEMPLATE (ti);
10788 if (DECL_TEMPLATE_RESULT (tmpl) == TYPE_NAME (existing))
10789 existing = tmpl;
10790 }
10791 }
10792 }
10793 else if (mk == MK_unique)
10794 kind = "unique";
10795 else
10796 {
10797 tree name = tree_node ();
10798
10799 merge_key key;
10800 unsigned code = u ();
10801 key.ref_q = cp_ref_qualifier ((code >> 0) & 3);
10802 key.index = code >> 2;
10803
10804 if (mk == MK_enum)
10805 key.ret = tree_node ();
10806 else if (mk == MK_partial
10807 || ((mk == MK_named || mk == MK_friend_spec)
10808 && inner && TREE_CODE (inner) == FUNCTION_DECL))
10809 {
10810 key.ret = tree_node ();
10811 tree arg, *arg_ptr = &key.args;
10812 while ((arg = tree_node ())
10813 && arg != void_list_node
10814 && mk != MK_partial)
10815 {
10816 *arg_ptr = tree_cons (NULL_TREE, arg, NULL_TREE);
10817 arg_ptr = &TREE_CHAIN (*arg_ptr);
10818 }
10819 *arg_ptr = arg;
10820 key.constraints = tree_node ();
10821 }
10822
10823 if (get_overrun ())
10824 return error_mark_node;
10825
10826 if (mk < MK_indirect_lwm)
10827 {
10828 DECL_NAME (decl) = name;
10829 DECL_CONTEXT (decl) = FROB_CONTEXT (container);
10830 }
10831 if (inner)
10832 {
10833 DECL_NAME (inner) = DECL_NAME (decl);
10834 DECL_CONTEXT (inner) = DECL_CONTEXT (decl);
10835 }
10836
10837 if (mk == MK_partial)
10838 {
10839 for (tree spec = DECL_TEMPLATE_SPECIALIZATIONS (key.ret);
10840 spec; spec = TREE_CHAIN (spec))
10841 {
10842 tree tmpl = TREE_VALUE (spec);
10843 if (template_args_equal (key.args,
10844 CLASSTYPE_TI_ARGS (TREE_TYPE (tmpl)))
10845 && cp_tree_equal (key.constraints,
10846 get_constraints
10847 (DECL_TEMPLATE_RESULT (tmpl))))
10848 {
10849 existing = tmpl;
10850 break;
10851 }
10852 }
10853 if (!existing)
10854 add_mergeable_specialization (key.ret, key.args, decl, 2);
10855 }
10856 else
10857 switch (TREE_CODE (container))
10858 {
10859 default:
10860 gcc_unreachable ();
10861
10862 case NAMESPACE_DECL:
10863 if (mk == MK_attached)
10864 {
10865 if (DECL_LANG_SPECIFIC (name)
10866 && VAR_OR_FUNCTION_DECL_P (name)
10867 && DECL_MODULE_ATTACHMENTS_P (name))
10868 if (attachset *set = attached_table->get (DECL_UID (name)))
10869 if (key.index < set->num)
10870 {
10871 existing = set->values[key.index];
10872 if (existing)
10873 {
10874 gcc_checking_assert
10875 (DECL_IMPLICIT_TYPEDEF_P (existing));
10876 if (inner != decl)
10877 existing
10878 = CLASSTYPE_TI_TEMPLATE (TREE_TYPE (existing));
10879 }
10880 }
10881 }
10882 else if (is_mod && !(state->is_module () || state->is_partition ()))
10883 kind = "unique";
10884 else
10885 {
10886 gcc_checking_assert (mk == MK_named || mk == MK_enum);
10887 tree mvec;
10888 tree *vslot = mergeable_namespace_slots (container, name,
10889 !is_mod, &mvec);
10890 existing = check_mergeable_decl (mk, decl, *vslot, key);
10891 if (!existing)
10892 add_mergeable_namespace_entity (vslot, decl);
10893 else
10894 {
10895 /* Note that we now have duplicates to deal with in
10896 name lookup. */
10897 if (is_mod)
10898 BINDING_VECTOR_PARTITION_DUPS_P (mvec) = true;
10899 else
10900 BINDING_VECTOR_GLOBAL_DUPS_P (mvec) = true;
10901 }
10902 }
10903 break;
10904
10905 case FUNCTION_DECL:
10906 // FIXME: What about a voldemort? how do we find what it
10907 // duplicates? Do we have to number vmorts relative to
10908 // their containing function? But how would that work
10909 // when matching an in-TU declaration?
10910 kind = "unique";
10911 break;
10912
10913 case TYPE_DECL:
10914 if (is_mod && !(state->is_module () || state->is_partition ())
10915 /* Implicit member functions can come from
10916 anywhere. */
10917 && !(DECL_ARTIFICIAL (decl)
10918 && TREE_CODE (decl) == FUNCTION_DECL
10919 && !DECL_THUNK_P (decl)))
10920 kind = "unique";
10921 else
10922 {
10923 tree ctx = TREE_TYPE (container);
10924
10925 /* For some reason templated enumeral types are not marked
10926 as COMPLETE_TYPE_P, even though they have members.
10927 This may well be a bug elsewhere. */
10928 if (TREE_CODE (ctx) == ENUMERAL_TYPE)
10929 existing = find_enum_member (ctx, name);
10930 else if (COMPLETE_TYPE_P (ctx))
10931 {
10932 switch (mk)
10933 {
10934 default:
10935 gcc_unreachable ();
10936
10937 case MK_named:
10938 existing = lookup_class_binding (ctx, name);
10939 if (existing)
10940 {
10941 tree inner = decl;
10942 if (TREE_CODE (inner) == TEMPLATE_DECL
10943 && !DECL_MEMBER_TEMPLATE_P (inner))
10944 inner = DECL_TEMPLATE_RESULT (inner);
10945
10946 existing = check_mergeable_decl
10947 (mk, inner, existing, key);
10948
10949 if (!existing && DECL_ALIAS_TEMPLATE_P (decl))
10950 {} // FIXME: Insert into specialization
10951 // tables, we'll need the arguments for that!
10952 }
10953 break;
10954
10955 case MK_field:
10956 {
10957 unsigned ix = key.index;
10958 for (tree field = TYPE_FIELDS (ctx);
10959 field; field = DECL_CHAIN (field))
10960 {
10961 tree finner = STRIP_TEMPLATE (field);
10962 if (TREE_CODE (finner) == TREE_CODE (inner))
10963 if (!ix--)
10964 {
10965 existing = field;
10966 break;
10967 }
10968 }
10969 }
10970 break;
10971
10972 case MK_vtable:
10973 {
10974 unsigned ix = key.index;
10975 for (tree vtable = CLASSTYPE_VTABLES (ctx);
10976 vtable; vtable = DECL_CHAIN (vtable))
10977 if (!ix--)
10978 {
10979 existing = vtable;
10980 break;
10981 }
10982 }
10983 break;
10984
10985 case MK_as_base:
10986 {
10987 tree as_base = CLASSTYPE_AS_BASE (ctx);
10988 if (as_base && as_base != ctx)
10989 existing = TYPE_NAME (as_base);
10990 }
10991 break;
10992
10993 case MK_local_friend:
10994 {
10995 unsigned ix = key.index;
10996 for (tree decls = CLASSTYPE_DECL_LIST (ctx);
10997 decls; decls = TREE_CHAIN (decls))
10998 if (!TREE_PURPOSE (decls) && !ix--)
10999 {
11000 existing
11001 = friend_from_decl_list (TREE_VALUE (decls));
11002 break;
11003 }
11004 }
11005 break;
11006 }
11007
11008 if (existing && mk < MK_indirect_lwm && mk != MK_partial
11009 && TREE_CODE (decl) == TEMPLATE_DECL
11010 && !DECL_MEMBER_TEMPLATE_P (decl))
11011 {
11012 tree ti;
11013 if (DECL_IMPLICIT_TYPEDEF_P (existing))
11014 ti = TYPE_TEMPLATE_INFO (TREE_TYPE (existing));
11015 else
11016 ti = DECL_TEMPLATE_INFO (existing);
11017 existing = TI_TEMPLATE (ti);
11018 }
11019 }
11020 }
11021 }
11022
11023 if (mk == MK_friend_spec)
11024 {
11025 spec_entry spec;
11026 spec.tmpl = tree_node ();
11027 spec.args = tree_node ();
11028 spec.spec = decl;
11029 unsigned flags = u ();
11030
11031 tree e = match_mergeable_specialization (true, &spec);
11032 if (!e)
11033 add_mergeable_specialization (spec.tmpl, spec.args,
11034 existing ? existing : decl, flags);
11035 else if (e != existing)
11036 set_overrun ();
11037 }
11038 }
11039
11040 dump (dumper::MERGE)
11041 && dump ("Read:%d's %s merge key (%s) %C:%N", tag, merge_kind_name[mk],
11042 existing ? "matched" : kind, TREE_CODE (decl), decl);
11043
11044 return existing;
11045 }
11046
11047 void
11048 trees_out::binfo_mergeable (tree binfo)
11049 {
11050 tree dom = binfo;
11051 while (tree parent = BINFO_INHERITANCE_CHAIN (dom))
11052 dom = parent;
11053 tree type = BINFO_TYPE (dom);
11054 gcc_checking_assert (TYPE_BINFO (type) == dom);
11055 tree_node (type);
11056 if (streaming_p ())
11057 {
11058 unsigned ix = 0;
11059 for (; dom != binfo; dom = TREE_CHAIN (dom))
11060 ix++;
11061 u (ix);
11062 }
11063 }
11064
11065 unsigned
11066 trees_in::binfo_mergeable (tree *type)
11067 {
11068 *type = tree_node ();
11069 return u ();
11070 }
11071
11072 /* DECL is a just streamed mergeable decl that should match EXISTING. Check
11073 it does and issue an appropriate diagnostic if not. Merge any
11074 bits from DECL to EXISTING. This is stricter matching than
11075 decls_match, because we can rely on ODR-sameness, and we cannot use
11076 decls_match because it can cause instantiations of constraints. */
11077
11078 bool
11079 trees_in::is_matching_decl (tree existing, tree decl)
11080 {
11081 // FIXME: We should probably do some duplicate decl-like stuff here
11082 // (beware, default parms should be the same?) Can we just call
11083 // duplicate_decls and teach it how to handle the module-specific
11084 // permitted/required duplications?
11085
11086 // We know at this point that the decls have matched by key, so we
11087 // can elide some of the checking
11088 gcc_checking_assert (TREE_CODE (existing) == TREE_CODE (decl));
11089
11090 tree inner = decl;
11091 if (TREE_CODE (decl) == TEMPLATE_DECL)
11092 {
11093 inner = DECL_TEMPLATE_RESULT (decl);
11094 gcc_checking_assert (TREE_CODE (DECL_TEMPLATE_RESULT (existing))
11095 == TREE_CODE (inner));
11096 }
11097
11098 gcc_checking_assert (!map_context_from);
11099 /* This mapping requres the new decl on the lhs and the existing
11100 entity on the rhs of the comparitors below. */
11101 map_context_from = inner;
11102 map_context_to = STRIP_TEMPLATE (existing);
11103
11104 if (TREE_CODE (inner) == FUNCTION_DECL)
11105 {
11106 tree e_ret = fndecl_declared_return_type (existing);
11107 tree d_ret = fndecl_declared_return_type (decl);
11108
11109 if (decl != inner && DECL_NAME (inner) == fun_identifier
11110 && LAMBDA_TYPE_P (DECL_CONTEXT (inner)))
11111 /* This has a recursive type that will compare different. */;
11112 else if (!same_type_p (d_ret, e_ret))
11113 goto mismatch;
11114
11115 tree e_type = TREE_TYPE (existing);
11116 tree d_type = TREE_TYPE (decl);
11117
11118 if (DECL_EXTERN_C_P (decl) != DECL_EXTERN_C_P (existing))
11119 goto mismatch;
11120
11121 for (tree e_args = TYPE_ARG_TYPES (e_type),
11122 d_args = TYPE_ARG_TYPES (d_type);
11123 e_args != d_args && (e_args || d_args);
11124 e_args = TREE_CHAIN (e_args), d_args = TREE_CHAIN (d_args))
11125 {
11126 if (!(e_args && d_args))
11127 goto mismatch;
11128
11129 if (!same_type_p (TREE_VALUE (d_args), TREE_VALUE (e_args)))
11130 goto mismatch;
11131
11132 // FIXME: Check default values
11133 }
11134
11135 /* If EXISTING has an undeduced or uninstantiated exception
11136 specification, but DECL does not, propagate the exception
11137 specification. Otherwise we end up asserting or trying to
11138 instantiate it in the middle of loading. */
11139 tree e_spec = TYPE_RAISES_EXCEPTIONS (e_type);
11140 tree d_spec = TYPE_RAISES_EXCEPTIONS (d_type);
11141 if (DEFERRED_NOEXCEPT_SPEC_P (e_spec))
11142 {
11143 if (!DEFERRED_NOEXCEPT_SPEC_P (d_spec)
11144 || (UNEVALUATED_NOEXCEPT_SPEC_P (e_spec)
11145 && !UNEVALUATED_NOEXCEPT_SPEC_P (d_spec)))
11146 {
11147 dump (dumper::MERGE)
11148 && dump ("Propagating instantiated noexcept to %N", existing);
11149 TREE_TYPE (existing) = d_type;
11150
11151 /* Propagate to existing clones. */
11152 tree clone;
11153 FOR_EACH_CLONE (clone, existing)
11154 {
11155 if (TREE_TYPE (clone) == e_type)
11156 TREE_TYPE (clone) = d_type;
11157 else
11158 TREE_TYPE (clone)
11159 = build_exception_variant (TREE_TYPE (clone), d_spec);
11160 }
11161 }
11162 }
11163 else if (!DEFERRED_NOEXCEPT_SPEC_P (d_spec)
11164 && !comp_except_specs (d_spec, e_spec, ce_type))
11165 goto mismatch;
11166 }
11167 /* Using cp_tree_equal because we can meet TYPE_ARGUMENT_PACKs
11168 here. I suspect the entities that directly do that are things
11169 that shouldn't go to duplicate_decls (FIELD_DECLs etc). */
11170 else if (!cp_tree_equal (TREE_TYPE (decl), TREE_TYPE (existing)))
11171 {
11172 mismatch:
11173 map_context_from = map_context_to = NULL_TREE;
11174 if (DECL_IS_UNDECLARED_BUILTIN (existing))
11175 /* Just like duplicate_decls, presum the user knows what
11176 they're doing in overriding a builtin. */
11177 TREE_TYPE (existing) = TREE_TYPE (decl);
11178 else
11179 {
11180 // FIXME:QOI Might be template specialization from a module,
11181 // not necessarily global module
11182 error_at (DECL_SOURCE_LOCATION (decl),
11183 "conflicting global module declaration %#qD", decl);
11184 inform (DECL_SOURCE_LOCATION (existing),
11185 "existing declaration %#qD", existing);
11186 return false;
11187 }
11188 }
11189
11190 map_context_from = map_context_to = NULL_TREE;
11191
11192 if (DECL_IS_UNDECLARED_BUILTIN (existing)
11193 && !DECL_IS_UNDECLARED_BUILTIN (decl))
11194 {
11195 /* We're matching a builtin that the user has yet to declare.
11196 We are the one! This is very much duplicate-decl
11197 shenanigans. */
11198 DECL_SOURCE_LOCATION (existing) = DECL_SOURCE_LOCATION (decl);
11199 if (TREE_CODE (decl) != TYPE_DECL)
11200 {
11201 /* Propagate exceptions etc. */
11202 TREE_TYPE (existing) = TREE_TYPE (decl);
11203 TREE_NOTHROW (existing) = TREE_NOTHROW (decl);
11204 }
11205 /* This is actually an import! */
11206 DECL_MODULE_IMPORT_P (existing) = true;
11207
11208 /* Yay, sliced! */
11209 existing->base = decl->base;
11210
11211 if (TREE_CODE (decl) == FUNCTION_DECL)
11212 {
11213 /* Ew :( */
11214 memcpy (&existing->decl_common.size,
11215 &decl->decl_common.size,
11216 (offsetof (tree_decl_common, pt_uid)
11217 - offsetof (tree_decl_common, size)));
11218 auto bltin_class = DECL_BUILT_IN_CLASS (decl);
11219 existing->function_decl.built_in_class = bltin_class;
11220 auto fncode = DECL_UNCHECKED_FUNCTION_CODE (decl);
11221 DECL_UNCHECKED_FUNCTION_CODE (existing) = fncode;
11222 if (existing->function_decl.built_in_class == BUILT_IN_NORMAL)
11223 {
11224 if (builtin_decl_explicit_p (built_in_function (fncode)))
11225 switch (fncode)
11226 {
11227 case BUILT_IN_STPCPY:
11228 set_builtin_decl_implicit_p
11229 (built_in_function (fncode), true);
11230 break;
11231 default:
11232 set_builtin_decl_declared_p
11233 (built_in_function (fncode), true);
11234 break;
11235 }
11236 copy_attributes_to_builtin (decl);
11237 }
11238 }
11239 }
11240
11241 if (VAR_OR_FUNCTION_DECL_P (decl)
11242 && DECL_TEMPLATE_INSTANTIATED (decl))
11243 /* Don't instantiate again! */
11244 DECL_TEMPLATE_INSTANTIATED (existing) = true;
11245
11246 tree e_inner = inner == decl ? existing : DECL_TEMPLATE_RESULT (existing);
11247
11248 if (TREE_CODE (inner) == FUNCTION_DECL
11249 && DECL_DECLARED_INLINE_P (inner))
11250 DECL_DECLARED_INLINE_P (e_inner) = true;
11251 if (!DECL_EXTERNAL (inner))
11252 DECL_EXTERNAL (e_inner) = false;
11253
11254 // FIXME: Check default tmpl and fn parms here
11255
11256 return true;
11257 }
11258
11259 /* FN is an implicit member function that we've discovered is new to
11260 the class. Add it to the TYPE_FIELDS chain and the method vector.
11261 Reset the appropriate classtype lazy flag. */
11262
11263 bool
11264 trees_in::install_implicit_member (tree fn)
11265 {
11266 tree ctx = DECL_CONTEXT (fn);
11267 tree name = DECL_NAME (fn);
11268 /* We know these are synthesized, so the set of expected prototypes
11269 is quite restricted. We're not validating correctness, just
11270 distinguishing beteeen the small set of possibilities. */
11271 tree parm_type = TREE_VALUE (FUNCTION_FIRST_USER_PARMTYPE (fn));
11272 if (IDENTIFIER_CTOR_P (name))
11273 {
11274 if (CLASSTYPE_LAZY_DEFAULT_CTOR (ctx)
11275 && VOID_TYPE_P (parm_type))
11276 CLASSTYPE_LAZY_DEFAULT_CTOR (ctx) = false;
11277 else if (!TYPE_REF_P (parm_type))
11278 return false;
11279 else if (CLASSTYPE_LAZY_COPY_CTOR (ctx)
11280 && !TYPE_REF_IS_RVALUE (parm_type))
11281 CLASSTYPE_LAZY_COPY_CTOR (ctx) = false;
11282 else if (CLASSTYPE_LAZY_MOVE_CTOR (ctx))
11283 CLASSTYPE_LAZY_MOVE_CTOR (ctx) = false;
11284 else
11285 return false;
11286 }
11287 else if (IDENTIFIER_DTOR_P (name))
11288 {
11289 if (CLASSTYPE_LAZY_DESTRUCTOR (ctx))
11290 CLASSTYPE_LAZY_DESTRUCTOR (ctx) = false;
11291 else
11292 return false;
11293 if (DECL_VIRTUAL_P (fn))
11294 /* A virtual dtor should have been created when the class
11295 became complete. */
11296 return false;
11297 }
11298 else if (name == assign_op_identifier)
11299 {
11300 if (!TYPE_REF_P (parm_type))
11301 return false;
11302 else if (CLASSTYPE_LAZY_COPY_ASSIGN (ctx)
11303 && !TYPE_REF_IS_RVALUE (parm_type))
11304 CLASSTYPE_LAZY_COPY_ASSIGN (ctx) = false;
11305 else if (CLASSTYPE_LAZY_MOVE_ASSIGN (ctx))
11306 CLASSTYPE_LAZY_MOVE_ASSIGN (ctx) = false;
11307 else
11308 return false;
11309 }
11310 else
11311 return false;
11312
11313 dump (dumper::MERGE) && dump ("Adding implicit member %N", fn);
11314
11315 DECL_CHAIN (fn) = TYPE_FIELDS (ctx);
11316 TYPE_FIELDS (ctx) = fn;
11317
11318 add_method (ctx, fn, false);
11319
11320 /* Propagate TYPE_FIELDS. */
11321 fixup_type_variants (ctx);
11322
11323 return true;
11324 }
11325
11326 /* Return non-zero if DECL has a definition that would be interesting to
11327 write out. */
11328
11329 static bool
11330 has_definition (tree decl)
11331 {
11332 bool is_tmpl = TREE_CODE (decl) == TEMPLATE_DECL;
11333 if (is_tmpl)
11334 decl = DECL_TEMPLATE_RESULT (decl);
11335
11336 switch (TREE_CODE (decl))
11337 {
11338 default:
11339 break;
11340
11341 case FUNCTION_DECL:
11342 if (!DECL_SAVED_TREE (decl))
11343 /* Not defined. */
11344 break;
11345
11346 if (DECL_DECLARED_INLINE_P (decl))
11347 return true;
11348
11349 if (DECL_THIS_STATIC (decl)
11350 && (header_module_p ()
11351 || (!DECL_LANG_SPECIFIC (decl) || !DECL_MODULE_PURVIEW_P (decl))))
11352 /* GM static function. */
11353 return true;
11354
11355 if (DECL_TEMPLATE_INFO (decl))
11356 {
11357 int use_tpl = DECL_USE_TEMPLATE (decl);
11358
11359 // FIXME: Partial specializations have definitions too.
11360 if (use_tpl < 2)
11361 return true;
11362 }
11363 break;
11364
11365 case TYPE_DECL:
11366 {
11367 tree type = TREE_TYPE (decl);
11368 if (type == TYPE_MAIN_VARIANT (type)
11369 && decl == TYPE_NAME (type)
11370 && (TREE_CODE (type) == ENUMERAL_TYPE
11371 ? TYPE_VALUES (type) : TYPE_FIELDS (type)))
11372 return true;
11373 }
11374 break;
11375
11376 case VAR_DECL:
11377 if (DECL_LANG_SPECIFIC (decl)
11378 && DECL_TEMPLATE_INFO (decl)
11379 && DECL_USE_TEMPLATE (decl) < 2)
11380 return DECL_INITIAL (decl);
11381 else
11382 {
11383 if (!DECL_INITIALIZED_P (decl))
11384 return false;
11385
11386 if (header_module_p ()
11387 || (!DECL_LANG_SPECIFIC (decl) || !DECL_MODULE_PURVIEW_P (decl)))
11388 /* GM static variable. */
11389 return true;
11390
11391 if (!TREE_CONSTANT (decl))
11392 return false;
11393
11394 return true;
11395 }
11396 break;
11397
11398 case CONCEPT_DECL:
11399 if (DECL_INITIAL (decl))
11400 return true;
11401
11402 break;
11403 }
11404
11405 return false;
11406 }
11407
11408 uintptr_t *
11409 trees_in::find_duplicate (tree existing)
11410 {
11411 if (!duplicates)
11412 return NULL;
11413
11414 return duplicates->get (existing);
11415 }
11416
11417 /* We're starting to read a duplicate DECL. EXISTING is the already
11418 known node. */
11419
11420 void
11421 trees_in::register_duplicate (tree decl, tree existing)
11422 {
11423 if (!duplicates)
11424 duplicates = new duplicate_hash_map (40);
11425
11426 bool existed;
11427 uintptr_t &slot = duplicates->get_or_insert (existing, &existed);
11428 gcc_checking_assert (!existed);
11429 slot = reinterpret_cast<uintptr_t> (decl);
11430 }
11431
11432 /* We've read a definition of MAYBE_EXISTING. If not a duplicate,
11433 return MAYBE_EXISTING (into which the definition should be
11434 installed). Otherwise return NULL if already known bad, or the
11435 duplicate we read (for ODR checking, or extracting addtional merge
11436 information). */
11437
11438 tree
11439 trees_in::odr_duplicate (tree maybe_existing, bool has_defn)
11440 {
11441 tree res = NULL_TREE;
11442
11443 if (uintptr_t *dup = find_duplicate (maybe_existing))
11444 {
11445 if (!(*dup & 1))
11446 res = reinterpret_cast<tree> (*dup);
11447 }
11448 else
11449 res = maybe_existing;
11450
11451 assert_definition (maybe_existing, res && !has_defn);
11452
11453 // FIXME: We probably need to return the template, so that the
11454 // template header can be checked?
11455 return res ? STRIP_TEMPLATE (res) : NULL_TREE;
11456 }
11457
11458 /* The following writer functions rely on the current behaviour of
11459 depset::hash::add_dependency making the decl and defn depset nodes
11460 depend on eachother. That way we don't have to worry about seeding
11461 the tree map with named decls that cannot be looked up by name (I.e
11462 template and function parms). We know the decl and definition will
11463 be in the same cluster, which is what we want. */
11464
11465 void
11466 trees_out::write_function_def (tree decl)
11467 {
11468 tree_node (DECL_RESULT (decl));
11469 tree_node (DECL_INITIAL (decl));
11470 tree_node (DECL_SAVED_TREE (decl));
11471 tree_node (DECL_FRIEND_CONTEXT (decl));
11472
11473 constexpr_fundef *cexpr = retrieve_constexpr_fundef (decl);
11474 int tag = 0;
11475 if (cexpr)
11476 {
11477 if (cexpr->result == error_mark_node)
11478 /* We'll stream the RESULT_DECL naturally during the
11479 serialization. We never need to fish it back again, so
11480 that's ok. */
11481 tag = 0;
11482 else
11483 tag = insert (cexpr->result);
11484 }
11485 if (streaming_p ())
11486 {
11487 i (tag);
11488 if (tag)
11489 dump (dumper::TREE)
11490 && dump ("Constexpr:%d result %N", tag, cexpr->result);
11491 }
11492 if (tag)
11493 {
11494 unsigned ix = 0;
11495 for (tree parm = cexpr->parms; parm; parm = DECL_CHAIN (parm), ix++)
11496 {
11497 tag = insert (parm);
11498 if (streaming_p ())
11499 dump (dumper::TREE)
11500 && dump ("Constexpr:%d parm:%u %N", tag, ix, parm);
11501 }
11502 tree_node (cexpr->body);
11503 }
11504
11505 if (streaming_p ())
11506 {
11507 unsigned flags = 0;
11508
11509 if (DECL_NOT_REALLY_EXTERN (decl))
11510 flags |= 1;
11511
11512 u (flags);
11513 }
11514 }
11515
11516 void
11517 trees_out::mark_function_def (tree)
11518 {
11519 }
11520
11521 bool
11522 trees_in::read_function_def (tree decl, tree maybe_template)
11523 {
11524 dump () && dump ("Reading function definition %N", decl);
11525 tree result = tree_node ();
11526 tree initial = tree_node ();
11527 tree saved = tree_node ();
11528 tree context = tree_node ();
11529 constexpr_fundef cexpr;
11530
11531 tree maybe_dup = odr_duplicate (maybe_template, DECL_SAVED_TREE (decl));
11532 bool installing = maybe_dup && !DECL_SAVED_TREE (decl);
11533
11534 if (maybe_dup)
11535 for (auto parm = DECL_ARGUMENTS (maybe_dup); parm; parm = DECL_CHAIN (parm))
11536 DECL_CONTEXT (parm) = decl;
11537
11538 if (int wtag = i ())
11539 {
11540 int tag = 1;
11541 cexpr.result = error_mark_node;
11542
11543 cexpr.result = copy_decl (result);
11544 tag = insert (cexpr.result);
11545
11546 if (wtag != tag)
11547 set_overrun ();
11548 dump (dumper::TREE)
11549 && dump ("Constexpr:%d result %N", tag, cexpr.result);
11550
11551 cexpr.parms = NULL_TREE;
11552 tree *chain = &cexpr.parms;
11553 unsigned ix = 0;
11554 for (tree parm = DECL_ARGUMENTS (maybe_dup ? maybe_dup : decl);
11555 parm; parm = DECL_CHAIN (parm), ix++)
11556 {
11557 tree p = copy_decl (parm);
11558 tag = insert (p);
11559 dump (dumper::TREE)
11560 && dump ("Constexpr:%d parm:%u %N", tag, ix, p);
11561 *chain = p;
11562 chain = &DECL_CHAIN (p);
11563 }
11564 cexpr.body = tree_node ();
11565 cexpr.decl = decl;
11566 }
11567 else
11568 cexpr.decl = NULL_TREE;
11569
11570 unsigned flags = u ();
11571
11572 if (get_overrun ())
11573 return NULL_TREE;
11574
11575 if (installing)
11576 {
11577 DECL_NOT_REALLY_EXTERN (decl) = flags & 1;
11578 DECL_RESULT (decl) = result;
11579 DECL_INITIAL (decl) = initial;
11580 DECL_SAVED_TREE (decl) = saved;
11581 if (maybe_dup)
11582 DECL_ARGUMENTS (decl) = DECL_ARGUMENTS (maybe_dup);
11583
11584 if (context)
11585 SET_DECL_FRIEND_CONTEXT (decl, context);
11586 if (cexpr.decl)
11587 register_constexpr_fundef (cexpr);
11588 post_process (maybe_template);
11589 }
11590 else if (maybe_dup)
11591 {
11592 // FIXME:QOI Check matching defn
11593 }
11594
11595 return true;
11596 }
11597
11598 /* Also for CONCEPT_DECLs. */
11599
11600 void
11601 trees_out::write_var_def (tree decl)
11602 {
11603 tree init = DECL_INITIAL (decl);
11604 tree_node (init);
11605 if (!init)
11606 {
11607 tree dyn_init = NULL_TREE;
11608
11609 if (DECL_NONTRIVIALLY_INITIALIZED_P (decl))
11610 {
11611 dyn_init = value_member (decl,
11612 CP_DECL_THREAD_LOCAL_P (decl)
11613 ? tls_aggregates : static_aggregates);
11614 gcc_checking_assert (dyn_init);
11615 /* Mark it so write_inits knows this is needed. */
11616 TREE_LANG_FLAG_0 (dyn_init) = true;
11617 dyn_init = TREE_PURPOSE (dyn_init);
11618 }
11619 tree_node (dyn_init);
11620 }
11621 }
11622
11623 void
11624 trees_out::mark_var_def (tree)
11625 {
11626 }
11627
11628 bool
11629 trees_in::read_var_def (tree decl, tree maybe_template)
11630 {
11631 /* Do not mark the virtual table entries as used. */
11632 bool vtable = TREE_CODE (decl) == VAR_DECL && DECL_VTABLE_OR_VTT_P (decl);
11633 unused += vtable;
11634 tree init = tree_node ();
11635 tree dyn_init = init ? NULL_TREE : tree_node ();
11636 unused -= vtable;
11637
11638 if (get_overrun ())
11639 return false;
11640
11641 bool initialized = (VAR_P (decl) ? bool (DECL_INITIALIZED_P (decl))
11642 : bool (DECL_INITIAL (decl)));
11643 tree maybe_dup = odr_duplicate (maybe_template, initialized);
11644 bool installing = maybe_dup && !initialized;
11645 if (installing)
11646 {
11647 if (DECL_EXTERNAL (decl))
11648 DECL_NOT_REALLY_EXTERN (decl) = true;
11649 if (VAR_P (decl))
11650 DECL_INITIALIZED_P (decl) = true;
11651 DECL_INITIAL (decl) = init;
11652 if (!dyn_init)
11653 ;
11654 else if (CP_DECL_THREAD_LOCAL_P (decl))
11655 tls_aggregates = tree_cons (dyn_init, decl, tls_aggregates);
11656 else
11657 static_aggregates = tree_cons (dyn_init, decl, static_aggregates);
11658 }
11659 else if (maybe_dup)
11660 {
11661 // FIXME:QOI Check matching defn
11662 }
11663
11664 return true;
11665 }
11666
11667 /* If MEMBER doesn't have an independent life outside the class,
11668 return it (or it's TEMPLATE_DECL). Otherwise NULL. */
11669
11670 static tree
11671 member_owned_by_class (tree member)
11672 {
11673 gcc_assert (DECL_P (member));
11674
11675 /* Clones are owned by their origin. */
11676 if (DECL_CLONED_FUNCTION_P (member))
11677 return NULL;
11678
11679 if (TREE_CODE (member) == FIELD_DECL)
11680 /* FIELD_DECLS can have template info in some cases. We always
11681 want the FIELD_DECL though, as there's never a TEMPLATE_DECL
11682 wrapping them. */
11683 return member;
11684
11685 int use_tpl = -1;
11686 if (tree ti = node_template_info (member, use_tpl))
11687 {
11688 // FIXME: Don't bail on things that CANNOT have their own
11689 // template header. No, make sure they're in the same cluster.
11690 if (use_tpl > 0)
11691 return NULL_TREE;
11692
11693 if (DECL_TEMPLATE_RESULT (TI_TEMPLATE (ti)) == member)
11694 member = TI_TEMPLATE (ti);
11695 }
11696 return member;
11697 }
11698
11699 void
11700 trees_out::write_class_def (tree defn)
11701 {
11702 gcc_assert (DECL_P (defn));
11703 if (streaming_p ())
11704 dump () && dump ("Writing class definition %N", defn);
11705
11706 tree type = TREE_TYPE (defn);
11707 tree_node (TYPE_SIZE (type));
11708 tree_node (TYPE_SIZE_UNIT (type));
11709 tree_node (TYPE_VFIELD (type));
11710 tree_node (TYPE_BINFO (type));
11711
11712 vec_chained_decls (TYPE_FIELDS (type));
11713
11714 /* Every class but __as_base has a type-specific. */
11715 gcc_checking_assert (!TYPE_LANG_SPECIFIC (type) == IS_FAKE_BASE_TYPE (type));
11716
11717 if (TYPE_LANG_SPECIFIC (type))
11718 {
11719 {
11720 vec<tree, va_gc> *v = CLASSTYPE_MEMBER_VEC (type);
11721 if (!v)
11722 {
11723 gcc_checking_assert (!streaming_p ());
11724 /* Force a class vector. */
11725 v = set_class_bindings (type, -1);
11726 gcc_checking_assert (v);
11727 }
11728
11729 unsigned len = v->length ();
11730 if (streaming_p ())
11731 u (len);
11732 for (unsigned ix = 0; ix != len; ix++)
11733 {
11734 tree m = (*v)[ix];
11735 if (TREE_CODE (m) == TYPE_DECL
11736 && DECL_ARTIFICIAL (m)
11737 && TYPE_STUB_DECL (TREE_TYPE (m)) == m)
11738 /* This is a using-decl for a type, or an anonymous
11739 struct (maybe with a typedef name). Write the type. */
11740 m = TREE_TYPE (m);
11741 tree_node (m);
11742 }
11743 }
11744 tree_node (CLASSTYPE_LAMBDA_EXPR (type));
11745
11746 /* TYPE_CONTAINS_VPTR_P looks at the vbase vector, which the
11747 reader won't know at this point. */
11748 int has_vptr = TYPE_CONTAINS_VPTR_P (type);
11749
11750 if (streaming_p ())
11751 {
11752 unsigned nvbases = vec_safe_length (CLASSTYPE_VBASECLASSES (type));
11753 u (nvbases);
11754 i (has_vptr);
11755 }
11756
11757 if (has_vptr)
11758 {
11759 tree_vec (CLASSTYPE_PURE_VIRTUALS (type));
11760 tree_pair_vec (CLASSTYPE_VCALL_INDICES (type));
11761 tree_node (CLASSTYPE_KEY_METHOD (type));
11762 }
11763 }
11764
11765 if (TYPE_LANG_SPECIFIC (type))
11766 {
11767 tree_node (CLASSTYPE_PRIMARY_BINFO (type));
11768
11769 tree as_base = CLASSTYPE_AS_BASE (type);
11770 if (as_base)
11771 as_base = TYPE_NAME (as_base);
11772 tree_node (as_base);
11773
11774 /* Write the vtables. */
11775 tree vtables = CLASSTYPE_VTABLES (type);
11776 vec_chained_decls (vtables);
11777 for (; vtables; vtables = TREE_CHAIN (vtables))
11778 write_definition (vtables);
11779
11780 /* Write the friend classes. */
11781 tree_list (CLASSTYPE_FRIEND_CLASSES (type), false);
11782
11783 /* Write the friend functions. */
11784 for (tree friends = DECL_FRIENDLIST (defn);
11785 friends; friends = TREE_CHAIN (friends))
11786 {
11787 /* Name of these friends. */
11788 tree_node (TREE_PURPOSE (friends));
11789 tree_list (TREE_VALUE (friends), false);
11790 }
11791 /* End of friend fns. */
11792 tree_node (NULL_TREE);
11793
11794 /* Write the decl list. */
11795 tree_list (CLASSTYPE_DECL_LIST (type), true);
11796
11797 if (TYPE_CONTAINS_VPTR_P (type))
11798 {
11799 /* Write the thunks. */
11800 for (tree decls = TYPE_FIELDS (type);
11801 decls; decls = DECL_CHAIN (decls))
11802 if (TREE_CODE (decls) == FUNCTION_DECL
11803 && DECL_VIRTUAL_P (decls)
11804 && DECL_THUNKS (decls))
11805 {
11806 tree_node (decls);
11807 /* Thunks are always unique, so chaining is ok. */
11808 chained_decls (DECL_THUNKS (decls));
11809 }
11810 tree_node (NULL_TREE);
11811 }
11812 }
11813 }
11814
11815 void
11816 trees_out::mark_class_member (tree member, bool do_defn)
11817 {
11818 gcc_assert (DECL_P (member));
11819
11820 member = member_owned_by_class (member);
11821 if (member)
11822 mark_declaration (member, do_defn && has_definition (member));
11823 }
11824
11825 void
11826 trees_out::mark_class_def (tree defn)
11827 {
11828 gcc_assert (DECL_P (defn));
11829 tree type = TREE_TYPE (defn);
11830 /* Mark the class members that are not type-decls and cannot have
11831 independent definitions. */
11832 for (tree member = TYPE_FIELDS (type); member; member = DECL_CHAIN (member))
11833 if (TREE_CODE (member) == FIELD_DECL
11834 || TREE_CODE (member) == USING_DECL
11835 /* A cloned enum-decl from 'using enum unrelated;' */
11836 || (TREE_CODE (member) == CONST_DECL
11837 && DECL_CONTEXT (member) == type))
11838 {
11839 mark_class_member (member);
11840 if (TREE_CODE (member) == FIELD_DECL)
11841 if (tree repr = DECL_BIT_FIELD_REPRESENTATIVE (member))
11842 mark_declaration (repr, false);
11843 }
11844
11845 /* Mark the binfo hierarchy. */
11846 for (tree child = TYPE_BINFO (type); child; child = TREE_CHAIN (child))
11847 mark_by_value (child);
11848
11849 if (TYPE_LANG_SPECIFIC (type))
11850 {
11851 for (tree vtable = CLASSTYPE_VTABLES (type);
11852 vtable; vtable = TREE_CHAIN (vtable))
11853 mark_declaration (vtable, true);
11854
11855 if (TYPE_CONTAINS_VPTR_P (type))
11856 /* Mark the thunks, they belong to the class definition,
11857 /not/ the thunked-to function. */
11858 for (tree decls = TYPE_FIELDS (type);
11859 decls; decls = DECL_CHAIN (decls))
11860 if (TREE_CODE (decls) == FUNCTION_DECL)
11861 for (tree thunks = DECL_THUNKS (decls);
11862 thunks; thunks = DECL_CHAIN (thunks))
11863 mark_declaration (thunks, false);
11864 }
11865 }
11866
11867 /* Nop sorting, needed for resorting the member vec. */
11868
11869 static void
11870 nop (void *, void *)
11871 {
11872 }
11873
11874 bool
11875 trees_in::read_class_def (tree defn, tree maybe_template)
11876 {
11877 gcc_assert (DECL_P (defn));
11878 dump () && dump ("Reading class definition %N", defn);
11879 tree type = TREE_TYPE (defn);
11880 tree size = tree_node ();
11881 tree size_unit = tree_node ();
11882 tree vfield = tree_node ();
11883 tree binfo = tree_node ();
11884 vec<tree, va_gc> *vbase_vec = NULL;
11885 vec<tree, va_gc> *member_vec = NULL;
11886 vec<tree, va_gc> *pure_virts = NULL;
11887 vec<tree_pair_s, va_gc> *vcall_indices = NULL;
11888 tree key_method = NULL_TREE;
11889 tree lambda = NULL_TREE;
11890
11891 /* Read the fields. */
11892 vec<tree, va_heap> *fields = vec_chained_decls ();
11893
11894 if (TYPE_LANG_SPECIFIC (type))
11895 {
11896 if (unsigned len = u ())
11897 {
11898 vec_alloc (member_vec, len);
11899 for (unsigned ix = 0; ix != len; ix++)
11900 {
11901 tree m = tree_node ();
11902 if (get_overrun ())
11903 break;
11904 if (TYPE_P (m))
11905 m = TYPE_STUB_DECL (m);
11906 member_vec->quick_push (m);
11907 }
11908 }
11909 lambda = tree_node ();
11910
11911 if (!get_overrun ())
11912 {
11913 unsigned nvbases = u ();
11914 if (nvbases)
11915 {
11916 vec_alloc (vbase_vec, nvbases);
11917 for (tree child = binfo; child; child = TREE_CHAIN (child))
11918 if (BINFO_VIRTUAL_P (child))
11919 vbase_vec->quick_push (child);
11920 }
11921 }
11922
11923 if (!get_overrun ())
11924 {
11925 int has_vptr = i ();
11926 if (has_vptr)
11927 {
11928 pure_virts = tree_vec ();
11929 vcall_indices = tree_pair_vec ();
11930 key_method = tree_node ();
11931 }
11932 }
11933 }
11934
11935 tree maybe_dup = odr_duplicate (maybe_template, TYPE_SIZE (type));
11936 bool installing = maybe_dup && !TYPE_SIZE (type);
11937 if (installing)
11938 {
11939 if (DECL_EXTERNAL (defn) && TYPE_LANG_SPECIFIC (type))
11940 {
11941 /* We don't deal with not-really-extern, because, for a
11942 module you want the import to be the interface, and for a
11943 header-unit, you're doing it wrong. */
11944 CLASSTYPE_INTERFACE_UNKNOWN (type) = false;
11945 CLASSTYPE_INTERFACE_ONLY (type) = true;
11946 }
11947
11948 if (maybe_dup != defn)
11949 {
11950 // FIXME: This is needed on other defns too, almost
11951 // duplicate-decl like? See is_matching_decl too.
11952 /* Copy flags from the duplicate. */
11953 tree type_dup = TREE_TYPE (maybe_dup);
11954
11955 /* Core pieces. */
11956 TYPE_MODE_RAW (type) = TYPE_MODE_RAW (type_dup);
11957 SET_DECL_MODE (defn, DECL_MODE (maybe_dup));
11958 TREE_ADDRESSABLE (type) = TREE_ADDRESSABLE (type_dup);
11959 DECL_SIZE (defn) = DECL_SIZE (maybe_dup);
11960 DECL_SIZE_UNIT (defn) = DECL_SIZE_UNIT (maybe_dup);
11961 DECL_ALIGN_RAW (defn) = DECL_ALIGN_RAW (maybe_dup);
11962 DECL_WARN_IF_NOT_ALIGN_RAW (defn)
11963 = DECL_WARN_IF_NOT_ALIGN_RAW (maybe_dup);
11964 DECL_USER_ALIGN (defn) = DECL_USER_ALIGN (maybe_dup);
11965
11966 /* C++ pieces. */
11967 TYPE_POLYMORPHIC_P (type) = TYPE_POLYMORPHIC_P (type_dup);
11968 TYPE_HAS_USER_CONSTRUCTOR (type)
11969 = TYPE_HAS_USER_CONSTRUCTOR (type_dup);
11970 TYPE_HAS_NONTRIVIAL_DESTRUCTOR (type)
11971 = TYPE_HAS_NONTRIVIAL_DESTRUCTOR (type_dup);
11972
11973 if (auto ls = TYPE_LANG_SPECIFIC (type_dup))
11974 {
11975 if (TYPE_LANG_SPECIFIC (type))
11976 {
11977 CLASSTYPE_BEFRIENDING_CLASSES (type_dup)
11978 = CLASSTYPE_BEFRIENDING_CLASSES (type);
11979 CLASSTYPE_TYPEINFO_VAR (type_dup)
11980 = CLASSTYPE_TYPEINFO_VAR (type);
11981 }
11982 for (tree v = type; v; v = TYPE_NEXT_VARIANT (v))
11983 TYPE_LANG_SPECIFIC (v) = ls;
11984 }
11985 }
11986
11987 TYPE_SIZE (type) = size;
11988 TYPE_SIZE_UNIT (type) = size_unit;
11989
11990 if (fields)
11991 {
11992 tree *chain = &TYPE_FIELDS (type);
11993 unsigned len = fields->length ();
11994 for (unsigned ix = 0; ix != len; ix++)
11995 {
11996 tree decl = (*fields)[ix];
11997
11998 if (!decl)
11999 {
12000 /* An anonymous struct with typedef name. */
12001 tree tdef = (*fields)[ix+1];
12002 decl = TYPE_STUB_DECL (TREE_TYPE (tdef));
12003 gcc_checking_assert (IDENTIFIER_ANON_P (DECL_NAME (decl))
12004 && decl != tdef);
12005 }
12006
12007 gcc_checking_assert (!*chain == !DECL_CLONED_FUNCTION_P (decl));
12008 *chain = decl;
12009 chain = &DECL_CHAIN (decl);
12010
12011 if (TREE_CODE (decl) == USING_DECL
12012 && TREE_CODE (USING_DECL_SCOPE (decl)) == RECORD_TYPE)
12013 {
12014 /* Reconstruct DECL_ACCESS. */
12015 tree decls = USING_DECL_DECLS (decl);
12016 tree access = declared_access (decl);
12017
12018 for (ovl_iterator iter (decls); iter; ++iter)
12019 {
12020 tree d = *iter;
12021
12022 retrofit_lang_decl (d);
12023 tree list = DECL_ACCESS (d);
12024
12025 if (!purpose_member (type, list))
12026 DECL_ACCESS (d) = tree_cons (type, access, list);
12027 }
12028 }
12029 }
12030 }
12031
12032 TYPE_VFIELD (type) = vfield;
12033 TYPE_BINFO (type) = binfo;
12034
12035 if (TYPE_LANG_SPECIFIC (type))
12036 {
12037 CLASSTYPE_LAMBDA_EXPR (type) = lambda;
12038
12039 CLASSTYPE_MEMBER_VEC (type) = member_vec;
12040 CLASSTYPE_PURE_VIRTUALS (type) = pure_virts;
12041 CLASSTYPE_VCALL_INDICES (type) = vcall_indices;
12042
12043 CLASSTYPE_KEY_METHOD (type) = key_method;
12044
12045 CLASSTYPE_VBASECLASSES (type) = vbase_vec;
12046
12047 /* Resort the member vector. */
12048 resort_type_member_vec (member_vec, NULL, nop, NULL);
12049 }
12050 }
12051 else if (maybe_dup)
12052 {
12053 // FIXME:QOI Check matching defn
12054 }
12055
12056 if (TYPE_LANG_SPECIFIC (type))
12057 {
12058 tree primary = tree_node ();
12059 tree as_base = tree_node ();
12060
12061 if (as_base)
12062 as_base = TREE_TYPE (as_base);
12063
12064 /* Read the vtables. */
12065 vec<tree, va_heap> *vtables = vec_chained_decls ();
12066 if (vtables)
12067 {
12068 unsigned len = vtables->length ();
12069 for (unsigned ix = 0; ix != len; ix++)
12070 {
12071 tree vtable = (*vtables)[ix];
12072 read_var_def (vtable, vtable);
12073 }
12074 }
12075
12076 tree friend_classes = tree_list (false);
12077 tree friend_functions = NULL_TREE;
12078 for (tree *chain = &friend_functions;
12079 tree name = tree_node (); chain = &TREE_CHAIN (*chain))
12080 {
12081 tree val = tree_list (false);
12082 *chain = build_tree_list (name, val);
12083 }
12084 tree decl_list = tree_list (true);
12085
12086 if (installing)
12087 {
12088 CLASSTYPE_PRIMARY_BINFO (type) = primary;
12089 CLASSTYPE_AS_BASE (type) = as_base;
12090
12091 if (vtables)
12092 {
12093 if (!CLASSTYPE_KEY_METHOD (type)
12094 /* Sneaky user may have defined it inline
12095 out-of-class. */
12096 || DECL_DECLARED_INLINE_P (CLASSTYPE_KEY_METHOD (type)))
12097 vec_safe_push (keyed_classes, type);
12098 unsigned len = vtables->length ();
12099 tree *chain = &CLASSTYPE_VTABLES (type);
12100 for (unsigned ix = 0; ix != len; ix++)
12101 {
12102 tree vtable = (*vtables)[ix];
12103 gcc_checking_assert (!*chain);
12104 *chain = vtable;
12105 chain = &DECL_CHAIN (vtable);
12106 }
12107 }
12108 CLASSTYPE_FRIEND_CLASSES (type) = friend_classes;
12109 DECL_FRIENDLIST (defn) = friend_functions;
12110 CLASSTYPE_DECL_LIST (type) = decl_list;
12111
12112 for (; friend_classes; friend_classes = TREE_CHAIN (friend_classes))
12113 {
12114 tree f = TREE_VALUE (friend_classes);
12115
12116 if (TYPE_P (f))
12117 {
12118 CLASSTYPE_BEFRIENDING_CLASSES (f)
12119 = tree_cons (NULL_TREE, type,
12120 CLASSTYPE_BEFRIENDING_CLASSES (f));
12121 dump () && dump ("Class %N befriending %C:%N",
12122 type, TREE_CODE (f), f);
12123 }
12124 }
12125
12126 for (; friend_functions;
12127 friend_functions = TREE_CHAIN (friend_functions))
12128 for (tree friend_decls = TREE_VALUE (friend_functions);
12129 friend_decls; friend_decls = TREE_CHAIN (friend_decls))
12130 {
12131 tree f = TREE_VALUE (friend_decls);
12132
12133 DECL_BEFRIENDING_CLASSES (f)
12134 = tree_cons (NULL_TREE, type, DECL_BEFRIENDING_CLASSES (f));
12135 dump () && dump ("Class %N befriending %C:%N",
12136 type, TREE_CODE (f), f);
12137 }
12138 }
12139
12140 if (TYPE_CONTAINS_VPTR_P (type))
12141 /* Read and install the thunks. */
12142 while (tree vfunc = tree_node ())
12143 {
12144 tree thunks = chained_decls ();
12145 if (installing)
12146 SET_DECL_THUNKS (vfunc, thunks);
12147 }
12148
12149 vec_free (vtables);
12150 }
12151
12152 /* Propagate to all variants. */
12153 if (installing)
12154 fixup_type_variants (type);
12155
12156 /* IS_FAKE_BASE_TYPE is inaccurate at this point, because if this is
12157 the fake base, we've not hooked it into the containing class's
12158 data structure yet. Fortunately it has a unique name. */
12159 if (installing
12160 && DECL_NAME (defn) != as_base_identifier
12161 && (!CLASSTYPE_TEMPLATE_INFO (type)
12162 || !uses_template_parms (TI_ARGS (CLASSTYPE_TEMPLATE_INFO (type)))))
12163 /* Emit debug info. It'd be nice to know if the interface TU
12164 already emitted this. */
12165 rest_of_type_compilation (type, !LOCAL_CLASS_P (type));
12166
12167 vec_free (fields);
12168
12169 return !get_overrun ();
12170 }
12171
12172 void
12173 trees_out::write_enum_def (tree decl)
12174 {
12175 tree type = TREE_TYPE (decl);
12176
12177 tree_node (TYPE_VALUES (type));
12178 tree_node (TYPE_MIN_VALUE (type));
12179 tree_node (TYPE_MAX_VALUE (type));
12180 }
12181
12182 void
12183 trees_out::mark_enum_def (tree decl)
12184 {
12185 tree type = TREE_TYPE (decl);
12186
12187 for (tree values = TYPE_VALUES (type); values; values = TREE_CHAIN (values))
12188 {
12189 tree cst = TREE_VALUE (values);
12190 mark_by_value (cst);
12191 /* We must mark the init to avoid circularity in tt_enum_int. */
12192 if (tree init = DECL_INITIAL (cst))
12193 if (TREE_CODE (init) == INTEGER_CST)
12194 mark_by_value (init);
12195 }
12196 }
12197
12198 bool
12199 trees_in::read_enum_def (tree defn, tree maybe_template)
12200 {
12201 tree type = TREE_TYPE (defn);
12202 tree values = tree_node ();
12203 tree min = tree_node ();
12204 tree max = tree_node ();
12205
12206 if (get_overrun ())
12207 return false;
12208
12209 tree maybe_dup = odr_duplicate (maybe_template, TYPE_VALUES (type));
12210 bool installing = maybe_dup && !TYPE_VALUES (type);
12211
12212 if (installing)
12213 {
12214 TYPE_VALUES (type) = values;
12215 TYPE_MIN_VALUE (type) = min;
12216 TYPE_MAX_VALUE (type) = max;
12217
12218 rest_of_type_compilation (type, DECL_NAMESPACE_SCOPE_P (defn));
12219 }
12220 else if (maybe_dup)
12221 {
12222 tree known = TYPE_VALUES (type);
12223 for (; known && values;
12224 known = TREE_CHAIN (known), values = TREE_CHAIN (values))
12225 {
12226 tree known_decl = TREE_VALUE (known);
12227 tree new_decl = TREE_VALUE (values);
12228
12229 if (DECL_NAME (known_decl) != DECL_NAME (new_decl))
12230 goto bad;
12231
12232 new_decl = maybe_duplicate (new_decl);
12233
12234 if (!cp_tree_equal (DECL_INITIAL (known_decl),
12235 DECL_INITIAL (new_decl)))
12236 goto bad;
12237 }
12238
12239 if (known || values)
12240 goto bad;
12241
12242 if (!cp_tree_equal (TYPE_MIN_VALUE (type), min)
12243 || !cp_tree_equal (TYPE_MAX_VALUE (type), max))
12244 {
12245 bad:;
12246 error_at (DECL_SOURCE_LOCATION (maybe_dup),
12247 "definition of %qD does not match", maybe_dup);
12248 inform (DECL_SOURCE_LOCATION (defn),
12249 "existing definition %qD", defn);
12250
12251 tree known_decl = NULL_TREE, new_decl = NULL_TREE;
12252
12253 if (known)
12254 known_decl = TREE_VALUE (known);
12255 if (values)
12256 new_decl = maybe_duplicate (TREE_VALUE (values));
12257
12258 if (known_decl && new_decl)
12259 {
12260 inform (DECL_SOURCE_LOCATION (new_decl),
12261 "... this enumerator %qD", new_decl);
12262 inform (DECL_SOURCE_LOCATION (known_decl),
12263 "enumerator %qD does not match ...", known_decl);
12264 }
12265 else if (known_decl || new_decl)
12266 {
12267 tree extra = known_decl ? known_decl : new_decl;
12268 inform (DECL_SOURCE_LOCATION (extra),
12269 "additional enumerators beginning with %qD", extra);
12270 }
12271 else
12272 inform (DECL_SOURCE_LOCATION (maybe_dup),
12273 "enumeration range differs");
12274
12275 /* Mark it bad. */
12276 unmatched_duplicate (maybe_template);
12277 }
12278 }
12279
12280 return true;
12281 }
12282
12283 /* Write out the body of DECL. See above circularity note. */
12284
12285 void
12286 trees_out::write_definition (tree decl)
12287 {
12288 if (streaming_p ())
12289 {
12290 assert_definition (decl);
12291 dump ()
12292 && dump ("Writing definition %C:%N", TREE_CODE (decl), decl);
12293 }
12294 else
12295 dump (dumper::DEPEND)
12296 && dump ("Depending definition %C:%N", TREE_CODE (decl), decl);
12297
12298 again:
12299 switch (TREE_CODE (decl))
12300 {
12301 default:
12302 gcc_unreachable ();
12303
12304 case TEMPLATE_DECL:
12305 decl = DECL_TEMPLATE_RESULT (decl);
12306 goto again;
12307
12308 case FUNCTION_DECL:
12309 write_function_def (decl);
12310 break;
12311
12312 case TYPE_DECL:
12313 {
12314 tree type = TREE_TYPE (decl);
12315 gcc_assert (TYPE_MAIN_VARIANT (type) == type
12316 && TYPE_NAME (type) == decl);
12317 if (TREE_CODE (type) == ENUMERAL_TYPE)
12318 write_enum_def (decl);
12319 else
12320 write_class_def (decl);
12321 }
12322 break;
12323
12324 case VAR_DECL:
12325 case CONCEPT_DECL:
12326 write_var_def (decl);
12327 break;
12328 }
12329 }
12330
12331 /* Mark a declaration for by-value walking. If DO_DEFN is true, mark
12332 its body too. */
12333
12334 void
12335 trees_out::mark_declaration (tree decl, bool do_defn)
12336 {
12337 mark_by_value (decl);
12338
12339 if (TREE_CODE (decl) == TEMPLATE_DECL)
12340 decl = DECL_TEMPLATE_RESULT (decl);
12341
12342 if (!do_defn)
12343 return;
12344
12345 switch (TREE_CODE (decl))
12346 {
12347 default:
12348 gcc_unreachable ();
12349
12350 case FUNCTION_DECL:
12351 mark_function_def (decl);
12352 break;
12353
12354 case TYPE_DECL:
12355 {
12356 tree type = TREE_TYPE (decl);
12357 gcc_assert (TYPE_MAIN_VARIANT (type) == type
12358 && TYPE_NAME (type) == decl);
12359 if (TREE_CODE (type) == ENUMERAL_TYPE)
12360 mark_enum_def (decl);
12361 else
12362 mark_class_def (decl);
12363 }
12364 break;
12365
12366 case VAR_DECL:
12367 case CONCEPT_DECL:
12368 mark_var_def (decl);
12369 break;
12370 }
12371 }
12372
12373 /* Read in the body of DECL. See above circularity note. */
12374
12375 bool
12376 trees_in::read_definition (tree decl)
12377 {
12378 dump () && dump ("Reading definition %C %N", TREE_CODE (decl), decl);
12379
12380 tree maybe_template = decl;
12381
12382 again:
12383 switch (TREE_CODE (decl))
12384 {
12385 default:
12386 break;
12387
12388 case TEMPLATE_DECL:
12389 decl = DECL_TEMPLATE_RESULT (decl);
12390 goto again;
12391
12392 case FUNCTION_DECL:
12393 return read_function_def (decl, maybe_template);
12394
12395 case TYPE_DECL:
12396 {
12397 tree type = TREE_TYPE (decl);
12398 gcc_assert (TYPE_MAIN_VARIANT (type) == type
12399 && TYPE_NAME (type) == decl);
12400 if (TREE_CODE (type) == ENUMERAL_TYPE)
12401 return read_enum_def (decl, maybe_template);
12402 else
12403 return read_class_def (decl, maybe_template);
12404 }
12405 break;
12406
12407 case VAR_DECL:
12408 case CONCEPT_DECL:
12409 return read_var_def (decl, maybe_template);
12410 }
12411
12412 return false;
12413 }
12414
12415 /* Lookup an maybe insert a slot for depset for KEY. */
12416
12417 depset **
12418 depset::hash::entity_slot (tree entity, bool insert)
12419 {
12420 traits::compare_type key (entity, NULL);
12421 depset **slot = find_slot_with_hash (key, traits::hash (key),
12422 insert ? INSERT : NO_INSERT);
12423
12424 return slot;
12425 }
12426
12427 depset **
12428 depset::hash::binding_slot (tree ctx, tree name, bool insert)
12429 {
12430 traits::compare_type key (ctx, name);
12431 depset **slot = find_slot_with_hash (key, traits::hash (key),
12432 insert ? INSERT : NO_INSERT);
12433
12434 return slot;
12435 }
12436
12437 depset *
12438 depset::hash::find_dependency (tree decl)
12439 {
12440 depset **slot = entity_slot (decl, false);
12441
12442 return slot ? *slot : NULL;
12443 }
12444
12445 depset *
12446 depset::hash::find_binding (tree ctx, tree name)
12447 {
12448 depset **slot = binding_slot (ctx, name, false);
12449
12450 return slot ? *slot : NULL;
12451 }
12452
12453 /* DECL is a newly discovered dependency. Create the depset, if it
12454 doesn't already exist. Add it to the worklist if so.
12455
12456 DECL will be an OVL_USING_P OVERLOAD, if it's from a binding that's
12457 a using decl.
12458
12459 We do not have to worry about adding the same dependency more than
12460 once. First it's harmless, but secondly the TREE_VISITED marking
12461 prevents us wanting to do it anyway. */
12462
12463 depset *
12464 depset::hash::make_dependency (tree decl, entity_kind ek)
12465 {
12466 /* Make sure we're being told consistent information. */
12467 gcc_checking_assert ((ek == EK_NAMESPACE)
12468 == (TREE_CODE (decl) == NAMESPACE_DECL
12469 && !DECL_NAMESPACE_ALIAS (decl)));
12470 gcc_checking_assert (ek != EK_BINDING && ek != EK_REDIRECT);
12471 gcc_checking_assert (TREE_CODE (decl) != FIELD_DECL
12472 && (TREE_CODE (decl) != USING_DECL
12473 || TREE_CODE (DECL_CONTEXT (decl)) == FUNCTION_DECL));
12474 gcc_checking_assert (!is_key_order ());
12475 if (ek == EK_USING)
12476 gcc_checking_assert (TREE_CODE (decl) == OVERLOAD);
12477
12478 if (TREE_CODE (decl) == TEMPLATE_DECL)
12479 {
12480 /* The template should have copied these from its result decl. */
12481 tree res = DECL_TEMPLATE_RESULT (decl);
12482
12483 gcc_checking_assert (DECL_MODULE_EXPORT_P (decl)
12484 == DECL_MODULE_EXPORT_P (res));
12485 if (DECL_LANG_SPECIFIC (res))
12486 {
12487 gcc_checking_assert (DECL_MODULE_PURVIEW_P (decl)
12488 == DECL_MODULE_PURVIEW_P (res));
12489 gcc_checking_assert ((DECL_MODULE_IMPORT_P (decl)
12490 == DECL_MODULE_IMPORT_P (res)));
12491 }
12492 }
12493
12494 depset **slot = entity_slot (decl, true);
12495 depset *dep = *slot;
12496 bool for_binding = ek == EK_FOR_BINDING;
12497
12498 if (!dep)
12499 {
12500 if (DECL_IMPLICIT_TYPEDEF_P (decl)
12501 /* ... not an enum, for instance. */
12502 && RECORD_OR_UNION_TYPE_P (TREE_TYPE (decl))
12503 && TYPE_LANG_SPECIFIC (TREE_TYPE (decl))
12504 && CLASSTYPE_USE_TEMPLATE (TREE_TYPE (decl)) == 2)
12505 {
12506 /* A partial or explicit specialization. Partial
12507 specializations might not be in the hash table, because
12508 there can be multiple differently-constrained variants.
12509
12510 template<typename T> class silly;
12511 template<typename T> requires true class silly {};
12512
12513 We need to find them, insert their TEMPLATE_DECL in the
12514 dep_hash, and then convert the dep we just found into a
12515 redirect. */
12516
12517 tree ti = TYPE_TEMPLATE_INFO (TREE_TYPE (decl));
12518 tree tmpl = TI_TEMPLATE (ti);
12519 tree partial = NULL_TREE;
12520 for (tree spec = DECL_TEMPLATE_SPECIALIZATIONS (tmpl);
12521 spec; spec = TREE_CHAIN (spec))
12522 if (DECL_TEMPLATE_RESULT (TREE_VALUE (spec)) == decl)
12523 {
12524 partial = TREE_VALUE (spec);
12525 break;
12526 }
12527
12528 if (partial)
12529 {
12530 /* Eagerly create an empty redirect. The following
12531 make_dependency call could cause hash reallocation,
12532 and invalidate slot's value. */
12533 depset *redirect = make_entity (decl, EK_REDIRECT);
12534
12535 /* Redirects are never reached -- always snap to their target. */
12536 redirect->set_flag_bit<DB_UNREACHED_BIT> ();
12537
12538 *slot = redirect;
12539
12540 depset *tmpl_dep = make_dependency (partial, EK_PARTIAL);
12541 gcc_checking_assert (tmpl_dep->get_entity_kind () == EK_PARTIAL);
12542
12543 redirect->deps.safe_push (tmpl_dep);
12544
12545 return redirect;
12546 }
12547 }
12548
12549 bool has_def = ek != EK_USING && has_definition (decl);
12550 if (ek > EK_BINDING)
12551 ek = EK_DECL;
12552
12553 /* The only OVERLOADS we should see are USING decls from
12554 bindings. */
12555 *slot = dep = make_entity (decl, ek, has_def);
12556
12557 if (TREE_CODE (decl) == TEMPLATE_DECL)
12558 {
12559 if (DECL_ALIAS_TEMPLATE_P (decl) && DECL_TEMPLATE_INFO (decl))
12560 dep->set_flag_bit<DB_ALIAS_TMPL_INST_BIT> ();
12561 else if (CHECKING_P)
12562 /* The template_result should otherwise not be in the
12563 table, or be an empty redirect (created above). */
12564 if (auto *eslot = entity_slot (DECL_TEMPLATE_RESULT (decl), false))
12565 gcc_checking_assert ((*eslot)->get_entity_kind () == EK_REDIRECT
12566 && !(*eslot)->deps.length ());
12567 }
12568
12569 if (ek != EK_USING
12570 && DECL_LANG_SPECIFIC (decl)
12571 && DECL_MODULE_IMPORT_P (decl))
12572 {
12573 /* Store the module number and index in cluster/section, so
12574 we don't have to look them up again. */
12575 unsigned index = import_entity_index (decl);
12576 module_state *from = import_entity_module (index);
12577 /* Remap will be zero for imports from partitions, which we
12578 want to treat as-if declared in this TU. */
12579 if (from->remap)
12580 {
12581 dep->cluster = index - from->entity_lwm;
12582 dep->section = from->remap;
12583 dep->set_flag_bit<DB_IMPORTED_BIT> ();
12584 }
12585 }
12586
12587 if (ek == EK_DECL
12588 && !dep->is_import ()
12589 && TREE_CODE (CP_DECL_CONTEXT (decl)) == NAMESPACE_DECL
12590 && !(TREE_CODE (decl) == TEMPLATE_DECL
12591 && DECL_UNINSTANTIATED_TEMPLATE_FRIEND_P (decl)))
12592 {
12593 tree ctx = CP_DECL_CONTEXT (decl);
12594 tree not_tmpl = STRIP_TEMPLATE (decl);
12595
12596 if (!TREE_PUBLIC (ctx))
12597 /* Member of internal namespace. */
12598 dep->set_flag_bit<DB_IS_INTERNAL_BIT> ();
12599 else if (VAR_OR_FUNCTION_DECL_P (not_tmpl)
12600 && DECL_THIS_STATIC (not_tmpl))
12601 {
12602 /* An internal decl. This is ok in a GM entity. */
12603 if (!(header_module_p ()
12604 || !DECL_LANG_SPECIFIC (not_tmpl)
12605 || !DECL_MODULE_PURVIEW_P (not_tmpl)))
12606 dep->set_flag_bit<DB_IS_INTERNAL_BIT> ();
12607 }
12608
12609 }
12610
12611 if (!dep->is_import ())
12612 worklist.safe_push (dep);
12613 }
12614
12615 dump (dumper::DEPEND)
12616 && dump ("%s on %s %C:%N found",
12617 ek == EK_REDIRECT ? "Redirect"
12618 : for_binding ? "Binding" : "Dependency",
12619 dep->entity_kind_name (), TREE_CODE (decl), decl);
12620
12621 return dep;
12622 }
12623
12624 /* DEP is a newly discovered dependency. Append it to current's
12625 depset. */
12626
12627 void
12628 depset::hash::add_dependency (depset *dep)
12629 {
12630 gcc_checking_assert (current && !is_key_order ());
12631 current->deps.safe_push (dep);
12632
12633 if (dep->is_internal () && !current->is_internal ())
12634 current->set_flag_bit<DB_REFS_INTERNAL_BIT> ();
12635
12636 if (current->get_entity_kind () == EK_USING
12637 && DECL_IMPLICIT_TYPEDEF_P (dep->get_entity ())
12638 && TREE_CODE (TREE_TYPE (dep->get_entity ())) == ENUMERAL_TYPE)
12639 {
12640 /* CURRENT is an unwrapped using-decl and DECL is an enum's
12641 implicit typedef. Is CURRENT a member of the enum? */
12642 tree c_decl = OVL_FUNCTION (current->get_entity ());
12643
12644 if (TREE_CODE (c_decl) == CONST_DECL
12645 && (current->deps[0]->get_entity ()
12646 == CP_DECL_CONTEXT (dep->get_entity ())))
12647 /* Make DECL depend on CURRENT. */
12648 dep->deps.safe_push (current);
12649 }
12650
12651 if (dep->is_unreached ())
12652 {
12653 /* The dependency is reachable now. */
12654 reached_unreached = true;
12655 dep->clear_flag_bit<DB_UNREACHED_BIT> ();
12656 dump (dumper::DEPEND)
12657 && dump ("Reaching unreached %s %C:%N", dep->entity_kind_name (),
12658 TREE_CODE (dep->get_entity ()), dep->get_entity ());
12659 }
12660 }
12661
12662 depset *
12663 depset::hash::add_dependency (tree decl, entity_kind ek)
12664 {
12665 depset *dep;
12666
12667 if (is_key_order ())
12668 {
12669 dep = find_dependency (decl);
12670 if (dep)
12671 {
12672 current->deps.safe_push (dep);
12673 dump (dumper::MERGE)
12674 && dump ("Key dependency on %s %C:%N found",
12675 dep->entity_kind_name (), TREE_CODE (decl), decl);
12676 }
12677 else
12678 {
12679 /* It's not a mergeable decl, look for it in the original
12680 table. */
12681 dep = chain->find_dependency (decl);
12682 gcc_checking_assert (dep);
12683 }
12684 }
12685 else
12686 {
12687 dep = make_dependency (decl, ek);
12688 if (dep->get_entity_kind () != EK_REDIRECT)
12689 add_dependency (dep);
12690 }
12691
12692 return dep;
12693 }
12694
12695 void
12696 depset::hash::add_namespace_context (depset *dep, tree ns)
12697 {
12698 depset *ns_dep = make_dependency (ns, depset::EK_NAMESPACE);
12699 dep->deps.safe_push (ns_dep);
12700
12701 /* Mark it as special if imported so we don't walk connect when
12702 SCCing. */
12703 if (!dep->is_binding () && ns_dep->is_import ())
12704 dep->set_special ();
12705 }
12706
12707 struct add_binding_data
12708 {
12709 tree ns;
12710 bitmap partitions;
12711 depset *binding;
12712 depset::hash *hash;
12713 bool met_namespace;
12714 };
12715
12716 bool
12717 depset::hash::add_binding_entity (tree decl, WMB_Flags flags, void *data_)
12718 {
12719 auto data = static_cast <add_binding_data *> (data_);
12720
12721 if (TREE_CODE (decl) != NAMESPACE_DECL || DECL_NAMESPACE_ALIAS (decl))
12722 {
12723 tree inner = decl;
12724
12725 if (TREE_CODE (inner) == CONST_DECL
12726 && TREE_CODE (DECL_CONTEXT (inner)) == ENUMERAL_TYPE)
12727 inner = TYPE_NAME (DECL_CONTEXT (inner));
12728 else if (TREE_CODE (inner) == TEMPLATE_DECL)
12729 inner = DECL_TEMPLATE_RESULT (inner);
12730
12731 if (!DECL_LANG_SPECIFIC (inner) || !DECL_MODULE_PURVIEW_P (inner))
12732 /* Ignore global module fragment entities. */
12733 return false;
12734
12735 if (VAR_OR_FUNCTION_DECL_P (inner)
12736 && DECL_THIS_STATIC (inner))
12737 {
12738 if (!header_module_p ())
12739 /* Ignore internal-linkage entitites. */
12740 return false;
12741 }
12742
12743 if ((TREE_CODE (decl) == VAR_DECL
12744 || TREE_CODE (decl) == TYPE_DECL)
12745 && DECL_TINFO_P (decl))
12746 /* Ignore TINFO things. */
12747 return false;
12748
12749 if (!(flags & WMB_Using) && CP_DECL_CONTEXT (decl) != data->ns)
12750 {
12751 /* A using that lost its wrapper or an unscoped enum
12752 constant. */
12753 flags = WMB_Flags (flags | WMB_Using);
12754 if (DECL_MODULE_EXPORT_P (TREE_CODE (decl) == CONST_DECL
12755 ? TYPE_NAME (TREE_TYPE (decl))
12756 : STRIP_TEMPLATE (decl)))
12757 flags = WMB_Flags (flags | WMB_Export);
12758 }
12759
12760 if (!data->binding)
12761 /* No binding to check. */;
12762 else if (flags & WMB_Using)
12763 {
12764 /* Look in the binding to see if we already have this
12765 using. */
12766 for (unsigned ix = data->binding->deps.length (); --ix;)
12767 {
12768 depset *d = data->binding->deps[ix];
12769 if (d->get_entity_kind () == EK_USING
12770 && OVL_FUNCTION (d->get_entity ()) == decl)
12771 {
12772 if (!(flags & WMB_Hidden))
12773 d->clear_hidden_binding ();
12774 if (flags & WMB_Export)
12775 OVL_EXPORT_P (d->get_entity ()) = true;
12776 return false;
12777 }
12778 }
12779 }
12780 else if (flags & WMB_Dups)
12781 {
12782 /* Look in the binding to see if we already have this decl. */
12783 for (unsigned ix = data->binding->deps.length (); --ix;)
12784 {
12785 depset *d = data->binding->deps[ix];
12786 if (d->get_entity () == decl)
12787 {
12788 if (!(flags & WMB_Hidden))
12789 d->clear_hidden_binding ();
12790 return false;
12791 }
12792 }
12793 }
12794
12795 /* We're adding something. */
12796 if (!data->binding)
12797 {
12798 data->binding = make_binding (data->ns, DECL_NAME (decl));
12799 data->hash->add_namespace_context (data->binding, data->ns);
12800
12801 depset **slot = data->hash->binding_slot (data->ns,
12802 DECL_NAME (decl), true);
12803 gcc_checking_assert (!*slot);
12804 *slot = data->binding;
12805 }
12806
12807 if (flags & WMB_Using)
12808 {
12809 decl = ovl_make (decl, NULL_TREE);
12810 if (flags & WMB_Export)
12811 OVL_EXPORT_P (decl) = true;
12812 }
12813
12814 depset *dep = data->hash->make_dependency
12815 (decl, flags & WMB_Using ? EK_USING : EK_FOR_BINDING);
12816 if (flags & WMB_Hidden)
12817 dep->set_hidden_binding ();
12818 data->binding->deps.safe_push (dep);
12819 /* Binding and contents are mutually dependent. */
12820 dep->deps.safe_push (data->binding);
12821
12822 return true;
12823 }
12824 else if (DECL_NAME (decl) && !data->met_namespace)
12825 {
12826 /* Namespace, walk exactly once. */
12827 gcc_checking_assert (TREE_PUBLIC (decl));
12828 data->met_namespace = true;
12829 if (data->hash->add_namespace_entities (decl, data->partitions)
12830 || DECL_MODULE_EXPORT_P (decl))
12831 {
12832 data->hash->make_dependency (decl, depset::EK_NAMESPACE);
12833 return true;
12834 }
12835 }
12836
12837 return false;
12838 }
12839
12840 /* Recursively find all the namespace bindings of NS.
12841 Add a depset for every binding that contains an export or
12842 module-linkage entity. Add a defining depset for every such decl
12843 that we need to write a definition. Such defining depsets depend
12844 on the binding depset. Returns true if we contain something
12845 explicitly exported. */
12846
12847 bool
12848 depset::hash::add_namespace_entities (tree ns, bitmap partitions)
12849 {
12850 dump () && dump ("Looking for writables in %N", ns);
12851 dump.indent ();
12852
12853 unsigned count = 0;
12854 add_binding_data data;
12855 data.ns = ns;
12856 data.partitions = partitions;
12857 data.hash = this;
12858
12859 hash_table<named_decl_hash>::iterator end
12860 (DECL_NAMESPACE_BINDINGS (ns)->end ());
12861 for (hash_table<named_decl_hash>::iterator iter
12862 (DECL_NAMESPACE_BINDINGS (ns)->begin ()); iter != end; ++iter)
12863 {
12864 data.binding = nullptr;
12865 data.met_namespace = false;
12866 if (walk_module_binding (*iter, partitions, add_binding_entity, &data))
12867 count++;
12868 }
12869
12870 if (count)
12871 dump () && dump ("Found %u entries", count);
12872 dump.outdent ();
12873
12874 return count != 0;
12875 }
12876
12877 void
12878 depset::hash::add_partial_entities (vec<tree, va_gc> *partial_classes)
12879 {
12880 for (unsigned ix = 0; ix != partial_classes->length (); ix++)
12881 {
12882 tree inner = (*partial_classes)[ix];
12883
12884 depset *dep = make_dependency (inner, depset::EK_DECL);
12885
12886 if (dep->get_entity_kind () == depset::EK_REDIRECT)
12887 /* We should have recorded the template as a partial
12888 specialization. */
12889 gcc_checking_assert (dep->deps[0]->get_entity_kind ()
12890 == depset::EK_PARTIAL);
12891 else
12892 /* It was an explicit specialization, not a partial one. */
12893 gcc_checking_assert (dep->get_entity_kind ()
12894 == depset::EK_SPECIALIZATION);
12895 }
12896 }
12897
12898 /* Add the members of imported classes that we defined in this TU.
12899 This will also include lazily created implicit member function
12900 declarations. (All others will be definitions.) */
12901
12902 void
12903 depset::hash::add_class_entities (vec<tree, va_gc> *class_members)
12904 {
12905 for (unsigned ix = 0; ix != class_members->length (); ix++)
12906 {
12907 tree defn = (*class_members)[ix];
12908 depset *dep = make_dependency (defn, EK_INNER_DECL);
12909
12910 if (dep->get_entity_kind () == EK_REDIRECT)
12911 dep = dep->deps[0];
12912
12913 /* Only non-instantiations need marking as members. */
12914 if (dep->get_entity_kind () == EK_DECL)
12915 dep->set_flag_bit <DB_IS_MEMBER_BIT> ();
12916 }
12917 }
12918
12919 /* We add the partial & explicit specializations, and the explicit
12920 instantiations. */
12921
12922 static void
12923 specialization_add (bool decl_p, spec_entry *entry, void *data_)
12924 {
12925 vec<spec_entry *> *data = reinterpret_cast <vec<spec_entry *> *> (data_);
12926
12927 if (!decl_p)
12928 {
12929 /* We exclusively use decls to locate things. Make sure there's
12930 no mismatch between the two specialization tables we keep.
12931 pt.c optimizes instantiation lookup using a complicated
12932 heuristic. We don't attempt to replicate that algorithm, but
12933 observe its behaviour and reproduce it upon read back. */
12934
12935 gcc_checking_assert (DECL_ALIAS_TEMPLATE_P (entry->tmpl)
12936 || TREE_CODE (entry->spec) == ENUMERAL_TYPE
12937 || DECL_CLASS_TEMPLATE_P (entry->tmpl));
12938
12939 /* Only alias templates can appear in both tables (and
12940 if they're in the type table they must also be in the decl table). */
12941 gcc_checking_assert
12942 (!match_mergeable_specialization (true, entry, false)
12943 == (decl_p || !DECL_ALIAS_TEMPLATE_P (entry->tmpl)));
12944 }
12945 else if (VAR_OR_FUNCTION_DECL_P (entry->spec))
12946 gcc_checking_assert (!DECL_LOCAL_DECL_P (entry->spec));
12947
12948 data->safe_push (entry);
12949 }
12950
12951 /* Arbitrary stable comparison. */
12952
12953 static int
12954 specialization_cmp (const void *a_, const void *b_)
12955 {
12956 const spec_entry *ea = *reinterpret_cast<const spec_entry *const *> (a_);
12957 const spec_entry *eb = *reinterpret_cast<const spec_entry *const *> (b_);
12958
12959 if (ea == eb)
12960 return 0;
12961
12962 tree a = ea->spec;
12963 tree b = eb->spec;
12964 if (TYPE_P (a))
12965 {
12966 a = TYPE_NAME (a);
12967 b = TYPE_NAME (b);
12968 }
12969
12970 if (a == b)
12971 /* This can happen with friend specializations. Just order by
12972 entry address. See note in depset_cmp. */
12973 return ea < eb ? -1 : +1;
12974
12975 return DECL_UID (a) < DECL_UID (b) ? -1 : +1;
12976 }
12977
12978 /* We add all kinds of specialializations. Implicit specializations
12979 should only streamed and walked if they are reachable from
12980 elsewhere. Hence the UNREACHED flag. This is making the
12981 assumption that it is cheaper to reinstantiate them on demand
12982 elsewhere, rather than stream them in when we instantiate their
12983 general template. Also, if we do stream them, we can only do that
12984 if they are not internal (which they can become if they themselves
12985 touch an internal entity?). */
12986
12987 void
12988 depset::hash::add_specializations (bool decl_p)
12989 {
12990 vec<spec_entry *> data;
12991 data.create (100);
12992 walk_specializations (decl_p, specialization_add, &data);
12993 data.qsort (specialization_cmp);
12994 while (data.length ())
12995 {
12996 spec_entry *entry = data.pop ();
12997 tree spec = entry->spec;
12998 int use_tpl = 0;
12999 bool is_alias = false;
13000 bool is_friend = false;
13001
13002 if (decl_p && DECL_UNINSTANTIATED_TEMPLATE_FRIEND_P (entry->tmpl))
13003 /* A friend of a template. This is keyed to the
13004 instantiation. */
13005 is_friend = true;
13006
13007 if (!decl_p && DECL_ALIAS_TEMPLATE_P (entry->tmpl))
13008 {
13009 spec = TYPE_NAME (spec);
13010 is_alias = true;
13011 }
13012
13013 if (decl_p || is_alias)
13014 {
13015 if (tree ti = DECL_TEMPLATE_INFO (spec))
13016 {
13017 tree tmpl = TI_TEMPLATE (ti);
13018
13019 use_tpl = DECL_USE_TEMPLATE (spec);
13020 if (spec == DECL_TEMPLATE_RESULT (tmpl))
13021 {
13022 spec = tmpl;
13023 gcc_checking_assert (DECL_USE_TEMPLATE (spec) == use_tpl);
13024 }
13025 else if (is_friend)
13026 {
13027 if (TI_TEMPLATE (ti) != entry->tmpl
13028 || !template_args_equal (TI_ARGS (ti), entry->tmpl))
13029 goto template_friend;
13030 }
13031 }
13032 else
13033 {
13034 template_friend:;
13035 gcc_checking_assert (is_friend);
13036 /* This is a friend of a template class, but not the one
13037 that generated entry->spec itself (i.e. it's an
13038 equivalent clone). We do not need to record
13039 this. */
13040 continue;
13041 }
13042 }
13043 else
13044 {
13045 if (TREE_CODE (spec) == ENUMERAL_TYPE)
13046 {
13047 tree ctx = DECL_CONTEXT (TYPE_NAME (spec));
13048
13049 if (TYPE_P (ctx))
13050 use_tpl = CLASSTYPE_USE_TEMPLATE (ctx);
13051 else
13052 use_tpl = DECL_USE_TEMPLATE (ctx);
13053 }
13054 else
13055 use_tpl = CLASSTYPE_USE_TEMPLATE (spec);
13056
13057 tree ti = TYPE_TEMPLATE_INFO (spec);
13058 tree tmpl = TI_TEMPLATE (ti);
13059
13060 spec = TYPE_NAME (spec);
13061 if (spec == DECL_TEMPLATE_RESULT (tmpl))
13062 {
13063 spec = tmpl;
13064 use_tpl = DECL_USE_TEMPLATE (spec);
13065 }
13066 }
13067
13068 bool needs_reaching = false;
13069 if (use_tpl == 1)
13070 /* Implicit instantiations only walked if we reach them. */
13071 needs_reaching = true;
13072 else if (!DECL_LANG_SPECIFIC (spec)
13073 || !DECL_MODULE_PURVIEW_P (spec))
13074 /* Likewise, GMF explicit or partial specializations. */
13075 needs_reaching = true;
13076
13077 #if false && CHECKING_P
13078 /* The instantiation isn't always on
13079 DECL_TEMPLATE_INSTANTIATIONS, */
13080 // FIXME: we probably need to remember this information?
13081 /* Verify the specialization is on the
13082 DECL_TEMPLATE_INSTANTIATIONS of the template. */
13083 for (tree cons = DECL_TEMPLATE_INSTANTIATIONS (entry->tmpl);
13084 cons; cons = TREE_CHAIN (cons))
13085 if (TREE_VALUE (cons) == entry->spec)
13086 {
13087 gcc_assert (entry->args == TREE_PURPOSE (cons));
13088 goto have_spec;
13089 }
13090 gcc_unreachable ();
13091 have_spec:;
13092 #endif
13093
13094 depset *dep = make_dependency (spec, depset::EK_SPECIALIZATION);
13095 if (dep->is_special ())
13096 {
13097 /* An already located specialization, this must be the TYPE
13098 corresponding to an alias_decl we found in the decl
13099 table. */
13100 spec_entry *other = reinterpret_cast <spec_entry *> (dep->deps[0]);
13101 gcc_checking_assert (!decl_p && is_alias && !dep->is_type_spec ());
13102 gcc_checking_assert (other->tmpl == entry->tmpl
13103 && template_args_equal (other->args, entry->args)
13104 && TREE_TYPE (other->spec) == entry->spec);
13105 dep->set_flag_bit<DB_ALIAS_SPEC_BIT> ();
13106 }
13107 else
13108 {
13109 gcc_checking_assert (decl_p || !is_alias);
13110 if (dep->get_entity_kind () == depset::EK_REDIRECT)
13111 dep = dep->deps[0];
13112 else if (dep->get_entity_kind () == depset::EK_SPECIALIZATION)
13113 {
13114 dep->set_special ();
13115 dep->deps.safe_push (reinterpret_cast<depset *> (entry));
13116 if (!decl_p)
13117 dep->set_flag_bit<DB_TYPE_SPEC_BIT> ();
13118 }
13119
13120 if (needs_reaching)
13121 dep->set_flag_bit<DB_UNREACHED_BIT> ();
13122 if (is_friend)
13123 dep->set_flag_bit<DB_FRIEND_SPEC_BIT> ();
13124 }
13125 }
13126 data.release ();
13127 }
13128
13129 /* Add a depset into the mergeable hash. */
13130
13131 void
13132 depset::hash::add_mergeable (depset *mergeable)
13133 {
13134 gcc_checking_assert (is_key_order ());
13135 entity_kind ek = mergeable->get_entity_kind ();
13136 tree decl = mergeable->get_entity ();
13137 gcc_checking_assert (ek < EK_DIRECT_HWM);
13138
13139 depset **slot = entity_slot (decl, true);
13140 gcc_checking_assert (!*slot);
13141 depset *dep = make_entity (decl, ek);
13142 *slot = dep;
13143
13144 worklist.safe_push (dep);
13145
13146 /* So we can locate the mergeable depset this depset refers to,
13147 mark the first dep. */
13148 dep->set_special ();
13149 dep->deps.safe_push (mergeable);
13150 }
13151
13152 /* Iteratively find dependencies. During the walk we may find more
13153 entries on the same binding that need walking. */
13154
13155 void
13156 depset::hash::find_dependencies ()
13157 {
13158 trees_out walker (NULL, NULL, *this);
13159 vec<depset *> unreached;
13160 unreached.create (worklist.length ());
13161
13162 for (;;)
13163 {
13164 reached_unreached = false;
13165 while (worklist.length ())
13166 {
13167 depset *item = worklist.pop ();
13168
13169 gcc_checking_assert (!item->is_binding ());
13170 if (item->is_unreached ())
13171 unreached.quick_push (item);
13172 else
13173 {
13174 current = item;
13175 tree decl = current->get_entity ();
13176 dump (is_key_order () ? dumper::MERGE : dumper::DEPEND)
13177 && dump ("Dependencies of %s %C:%N",
13178 is_key_order () ? "key-order"
13179 : current->entity_kind_name (), TREE_CODE (decl), decl);
13180 dump.indent ();
13181 walker.begin ();
13182 if (current->get_entity_kind () == EK_USING)
13183 walker.tree_node (OVL_FUNCTION (decl));
13184 else if (TREE_VISITED (decl))
13185 /* A global tree. */;
13186 else if (TREE_CODE (decl) == NAMESPACE_DECL
13187 && !DECL_NAMESPACE_ALIAS (decl))
13188 add_namespace_context (current, CP_DECL_CONTEXT (decl));
13189 else
13190 {
13191 walker.mark_declaration (decl, current->has_defn ());
13192
13193 // FIXME: Perhaps p1815 makes this redundant? Or at
13194 // least simplifies it. Voldemort types are only
13195 // ever emissable when containing (inline) function
13196 // definition is emitted?
13197 /* Turn the Sneakoscope on when depending the decl. */
13198 sneakoscope = true;
13199 walker.decl_value (decl, current);
13200 sneakoscope = false;
13201 if (current->has_defn ())
13202 walker.write_definition (decl);
13203 }
13204 walker.end ();
13205
13206 if (!walker.is_key_order ()
13207 && TREE_CODE (decl) == TEMPLATE_DECL
13208 && !DECL_UNINSTANTIATED_TEMPLATE_FRIEND_P (decl))
13209 /* Mark all the explicit & partial specializations as
13210 reachable. */
13211 for (tree cons = DECL_TEMPLATE_INSTANTIATIONS (decl);
13212 cons; cons = TREE_CHAIN (cons))
13213 {
13214 tree spec = TREE_VALUE (cons);
13215 if (TYPE_P (spec))
13216 spec = TYPE_NAME (spec);
13217 int use_tpl;
13218 node_template_info (spec, use_tpl);
13219 if (use_tpl & 2)
13220 {
13221 depset *spec_dep = find_dependency (spec);
13222 if (spec_dep->get_entity_kind () == EK_REDIRECT)
13223 spec_dep = spec_dep->deps[0];
13224 if (spec_dep->is_unreached ())
13225 {
13226 reached_unreached = true;
13227 spec_dep->clear_flag_bit<DB_UNREACHED_BIT> ();
13228 dump (dumper::DEPEND)
13229 && dump ("Reaching unreached specialization"
13230 " %C:%N", TREE_CODE (spec), spec);
13231 }
13232 }
13233 }
13234
13235 dump.outdent ();
13236 current = NULL;
13237 }
13238 }
13239
13240 if (!reached_unreached)
13241 break;
13242
13243 /* It's possible the we reached the unreached before we
13244 processed it in the above loop, so we'll be doing this an
13245 extra time. However, to avoid that we have to do some
13246 bit shuffling that also involves a scan of the list.
13247 Swings & roundabouts I guess. */
13248 std::swap (worklist, unreached);
13249 }
13250
13251 unreached.release ();
13252 }
13253
13254 /* Compare two entries of a single binding. TYPE_DECL before
13255 non-exported before exported. */
13256
13257 static int
13258 binding_cmp (const void *a_, const void *b_)
13259 {
13260 depset *a = *(depset *const *)a_;
13261 depset *b = *(depset *const *)b_;
13262
13263 tree a_ent = a->get_entity ();
13264 tree b_ent = b->get_entity ();
13265 gcc_checking_assert (a_ent != b_ent
13266 && !a->is_binding ()
13267 && !b->is_binding ());
13268
13269 /* Implicit typedefs come first. */
13270 bool a_implicit = DECL_IMPLICIT_TYPEDEF_P (a_ent);
13271 bool b_implicit = DECL_IMPLICIT_TYPEDEF_P (b_ent);
13272 if (a_implicit || b_implicit)
13273 {
13274 /* A binding with two implicit type decls? That's unpossible! */
13275 gcc_checking_assert (!(a_implicit && b_implicit));
13276 return a_implicit ? -1 : +1; /* Implicit first. */
13277 }
13278
13279 /* Hidden before non-hidden. */
13280 bool a_hidden = a->is_hidden ();
13281 bool b_hidden = b->is_hidden ();
13282 if (a_hidden != b_hidden)
13283 return a_hidden ? -1 : +1;
13284
13285 bool a_using = a->get_entity_kind () == depset::EK_USING;
13286 bool a_export;
13287 if (a_using)
13288 {
13289 a_export = OVL_EXPORT_P (a_ent);
13290 a_ent = OVL_FUNCTION (a_ent);
13291 }
13292 else
13293 a_export = DECL_MODULE_EXPORT_P (TREE_CODE (a_ent) == CONST_DECL
13294 ? TYPE_NAME (TREE_TYPE (a_ent))
13295 : STRIP_TEMPLATE (a_ent));
13296
13297 bool b_using = b->get_entity_kind () == depset::EK_USING;
13298 bool b_export;
13299 if (b_using)
13300 {
13301 b_export = OVL_EXPORT_P (b_ent);
13302 b_ent = OVL_FUNCTION (b_ent);
13303 }
13304 else
13305 b_export = DECL_MODULE_EXPORT_P (TREE_CODE (b_ent) == CONST_DECL
13306 ? TYPE_NAME (TREE_TYPE (b_ent))
13307 : STRIP_TEMPLATE (b_ent));
13308
13309 /* Non-exports before exports. */
13310 if (a_export != b_export)
13311 return a_export ? +1 : -1;
13312
13313 /* At this point we don't care, but want a stable sort. */
13314
13315 if (a_using != b_using)
13316 /* using first. */
13317 return a_using? -1 : +1;
13318
13319 return DECL_UID (a_ent) < DECL_UID (b_ent) ? -1 : +1;
13320 }
13321
13322 /* Sort the bindings, issue errors about bad internal refs. */
13323
13324 bool
13325 depset::hash::finalize_dependencies ()
13326 {
13327 bool ok = true;
13328 depset::hash::iterator end (this->end ());
13329 for (depset::hash::iterator iter (begin ()); iter != end; ++iter)
13330 {
13331 depset *dep = *iter;
13332 if (dep->is_binding ())
13333 {
13334 /* Keep the containing namespace dep first. */
13335 gcc_checking_assert (dep->deps.length () > 1
13336 && (dep->deps[0]->get_entity_kind ()
13337 == EK_NAMESPACE)
13338 && (dep->deps[0]->get_entity ()
13339 == dep->get_entity ()));
13340 if (dep->deps.length () > 2)
13341 gcc_qsort (&dep->deps[1], dep->deps.length () - 1,
13342 sizeof (dep->deps[1]), binding_cmp);
13343 }
13344 else if (dep->refs_internal ())
13345 {
13346 for (unsigned ix = dep->deps.length (); ix--;)
13347 {
13348 depset *rdep = dep->deps[ix];
13349 if (rdep->is_internal ())
13350 {
13351 // FIXME:QOI Better location information? We're
13352 // losing, so it doesn't matter about efficiency
13353 tree decl = dep->get_entity ();
13354 error_at (DECL_SOURCE_LOCATION (decl),
13355 "%q#D references internal linkage entity %q#D",
13356 decl, rdep->get_entity ());
13357 break;
13358 }
13359 }
13360 ok = false;
13361 }
13362 }
13363
13364 return ok;
13365 }
13366
13367 /* Core of TARJAN's algorithm to find Strongly Connected Components
13368 within a graph. See https://en.wikipedia.org/wiki/
13369 Tarjan%27s_strongly_connected_components_algorithm for details.
13370
13371 We use depset::section as lowlink. Completed nodes have
13372 depset::cluster containing the cluster number, with the top
13373 bit set.
13374
13375 A useful property is that the output vector is a reverse
13376 topological sort of the resulting DAG. In our case that means
13377 dependent SCCs are found before their dependers. We make use of
13378 that property. */
13379
13380 void
13381 depset::tarjan::connect (depset *v)
13382 {
13383 gcc_checking_assert (v->is_binding ()
13384 || !(v->is_unreached () || v->is_import ()));
13385
13386 v->cluster = v->section = ++index;
13387 stack.safe_push (v);
13388
13389 /* Walk all our dependencies, ignore a first marked slot */
13390 for (unsigned ix = v->is_special (); ix != v->deps.length (); ix++)
13391 {
13392 depset *dep = v->deps[ix];
13393
13394 if (dep->is_binding () || !dep->is_import ())
13395 {
13396 unsigned lwm = dep->cluster;
13397
13398 if (!dep->cluster)
13399 {
13400 /* A new node. Connect it. */
13401 connect (dep);
13402 lwm = dep->section;
13403 }
13404
13405 if (dep->section && v->section > lwm)
13406 v->section = lwm;
13407 }
13408 }
13409
13410 if (v->section == v->cluster)
13411 {
13412 /* Root of a new SCC. Push all the members onto the result list. */
13413 unsigned num = v->cluster;
13414 depset *p;
13415 do
13416 {
13417 p = stack.pop ();
13418 p->cluster = num;
13419 p->section = 0;
13420 result.quick_push (p);
13421 }
13422 while (p != v);
13423 }
13424 }
13425
13426 /* Compare two depsets. The specific ordering is unimportant, we're
13427 just trying to get consistency. */
13428
13429 static int
13430 depset_cmp (const void *a_, const void *b_)
13431 {
13432 depset *a = *(depset *const *)a_;
13433 depset *b = *(depset *const *)b_;
13434
13435 depset::entity_kind a_kind = a->get_entity_kind ();
13436 depset::entity_kind b_kind = b->get_entity_kind ();
13437
13438 if (a_kind != b_kind)
13439 /* Different entity kinds, order by that. */
13440 return a_kind < b_kind ? -1 : +1;
13441
13442 tree a_decl = a->get_entity ();
13443 tree b_decl = b->get_entity ();
13444 if (a_kind == depset::EK_USING)
13445 {
13446 /* If one is a using, the other must be too. */
13447 a_decl = OVL_FUNCTION (a_decl);
13448 b_decl = OVL_FUNCTION (b_decl);
13449 }
13450
13451 if (a_decl != b_decl)
13452 /* Different entities, order by their UID. */
13453 return DECL_UID (a_decl) < DECL_UID (b_decl) ? -1 : +1;
13454
13455 if (a_kind == depset::EK_BINDING)
13456 {
13457 /* Both are bindings. Order by identifier hash. */
13458 gcc_checking_assert (a->get_name () != b->get_name ());
13459 return (IDENTIFIER_HASH_VALUE (a->get_name ())
13460 < IDENTIFIER_HASH_VALUE (b->get_name ())
13461 ? -1 : +1);
13462 }
13463
13464 /* They are the same decl. This can happen with two using decls
13465 pointing to the same target. The best we can aim for is
13466 consistently telling qsort how to order them. Hopefully we'll
13467 never have to debug a case that depends on this. Oh, who am I
13468 kidding? Good luck. */
13469 gcc_checking_assert (a_kind == depset::EK_USING);
13470
13471 /* Order by depset address. Not the best, but it is something. */
13472 return a < b ? -1 : +1;
13473 }
13474
13475 /* Sort the clusters in SCC such that those that depend on one another
13476 are placed later. */
13477
13478 // FIXME: I am not convinced this is needed and, if needed,
13479 // sufficient. We emit the decls in this order but that emission
13480 // could walk into later decls (from the body of the decl, or default
13481 // arg-like things). Why doesn't that walk do the right thing? And
13482 // if it DTRT why do we need to sort here -- won't things naturally
13483 // work? I think part of the issue is that when we're going to refer
13484 // to an entity by name, and that entity is in the same cluster as us,
13485 // we need to actually walk that entity, if we've not already walked
13486 // it.
13487 static void
13488 sort_cluster (depset::hash *original, depset *scc[], unsigned size)
13489 {
13490 depset::hash table (size, original);
13491
13492 dump.indent ();
13493
13494 /* Place bindings last, usings before that. It's not strictly
13495 necessary, but it does make things neater. Says Mr OCD. */
13496 unsigned bind_lwm = size;
13497 unsigned use_lwm = size;
13498 for (unsigned ix = 0; ix != use_lwm;)
13499 {
13500 depset *dep = scc[ix];
13501 switch (dep->get_entity_kind ())
13502 {
13503 case depset::EK_BINDING:
13504 /* Move to end. No increment. Notice this could be moving
13505 a using decl, which we'll then move again. */
13506 if (--bind_lwm != ix)
13507 {
13508 scc[ix] = scc[bind_lwm];
13509 scc[bind_lwm] = dep;
13510 }
13511 if (use_lwm > bind_lwm)
13512 {
13513 use_lwm--;
13514 break;
13515 }
13516 /* We must have copied a using, so move it too. */
13517 dep = scc[ix];
13518 gcc_checking_assert (dep->get_entity_kind () == depset::EK_USING);
13519 /* FALLTHROUGH */
13520
13521 case depset::EK_USING:
13522 if (--use_lwm != ix)
13523 {
13524 scc[ix] = scc[use_lwm];
13525 scc[use_lwm] = dep;
13526 }
13527 break;
13528
13529 case depset::EK_DECL:
13530 case depset::EK_SPECIALIZATION:
13531 case depset::EK_PARTIAL:
13532 table.add_mergeable (dep);
13533 ix++;
13534 break;
13535
13536 default:
13537 gcc_unreachable ();
13538 }
13539 }
13540
13541 gcc_checking_assert (use_lwm <= bind_lwm);
13542 dump (dumper::MERGE) && dump ("Ordering %u/%u depsets", use_lwm, size);
13543
13544 table.find_dependencies ();
13545
13546 vec<depset *> order = table.connect ();
13547 gcc_checking_assert (order.length () == use_lwm);
13548
13549 /* Now rewrite entries [0,lwm), in the dependency order we
13550 discovered. Usually each entity is in its own cluster. Rarely,
13551 we can get multi-entity clusters, in which case all but one must
13552 only be reached from within the cluster. This happens for
13553 something like:
13554
13555 template<typename T>
13556 auto Foo (const T &arg) -> TPL<decltype (arg)>;
13557
13558 The instantiation of TPL will be in the specialization table, and
13559 refer to Foo via arg. But we can only get to that specialization
13560 from Foo's declaration, so we only need to treat Foo as mergable
13561 (We'll do structural comparison of TPL<decltype (arg)>).
13562
13563 Finding the single cluster entry dep is very tricky and
13564 expensive. Let's just not do that. It's harmless in this case
13565 anyway. */
13566 unsigned pos = 0;
13567 unsigned cluster = ~0u;
13568 for (unsigned ix = 0; ix != order.length (); ix++)
13569 {
13570 gcc_checking_assert (order[ix]->is_special ());
13571 depset *dep = order[ix]->deps[0];
13572 scc[pos++] = dep;
13573 dump (dumper::MERGE)
13574 && dump ("Mergeable %u is %N%s", ix, dep->get_entity (),
13575 order[ix]->cluster == cluster ? " (tight)" : "");
13576 cluster = order[ix]->cluster;
13577 }
13578
13579 gcc_checking_assert (pos == use_lwm);
13580
13581 order.release ();
13582 dump (dumper::MERGE) && dump ("Ordered %u keys", pos);
13583 dump.outdent ();
13584 }
13585
13586 /* Reduce graph to SCCS clusters. SCCS will be populated with the
13587 depsets in dependency order. Each depset's CLUSTER field contains
13588 its cluster number. Each SCC has a unique cluster number, and are
13589 contiguous in SCCS. Cluster numbers are otherwise arbitrary. */
13590
13591 vec<depset *>
13592 depset::hash::connect ()
13593 {
13594 tarjan connector (size ());
13595 vec<depset *> deps;
13596 deps.create (size ());
13597 iterator end (this->end ());
13598 for (iterator iter (begin ()); iter != end; ++iter)
13599 {
13600 depset *item = *iter;
13601
13602 entity_kind kind = item->get_entity_kind ();
13603 if (kind == EK_BINDING
13604 || !(kind == EK_REDIRECT
13605 || item->is_unreached ()
13606 || item->is_import ()))
13607 deps.quick_push (item);
13608 }
13609
13610 /* Iteration over the hash table is an unspecified ordering. While
13611 that has advantages, it causes 2 problems. Firstly repeatable
13612 builds are tricky. Secondly creating testcases that check
13613 dependencies are correct by making sure a bad ordering would
13614 happen if that was wrong. */
13615 deps.qsort (depset_cmp);
13616
13617 while (deps.length ())
13618 {
13619 depset *v = deps.pop ();
13620 dump (dumper::CLUSTER) &&
13621 (v->is_binding ()
13622 ? dump ("Connecting binding %P", v->get_entity (), v->get_name ())
13623 : dump ("Connecting %s %s %C:%N",
13624 is_key_order () ? "key-order"
13625 : !v->has_defn () ? "declaration" : "definition",
13626 v->entity_kind_name (), TREE_CODE (v->get_entity ()),
13627 v->get_entity ()));
13628 if (!v->cluster)
13629 connector.connect (v);
13630 }
13631
13632 deps.release ();
13633 return connector.result;
13634 }
13635
13636 /* Load the entities referred to by this pendset. */
13637
13638 static bool
13639 pendset_lazy_load (pendset *pendings, bool specializations_p)
13640 {
13641 bool ok = true;
13642
13643 for (unsigned ix = 0; ok && ix != pendings->num; ix++)
13644 {
13645 unsigned index = pendings->values[ix];
13646 if (index & ~(~0u >> 1))
13647 {
13648 /* An indirection. */
13649 if (specializations_p)
13650 index = ~index;
13651 pendset *other = pending_table->get (index, true);
13652 if (!pendset_lazy_load (other, specializations_p))
13653 ok = false;
13654 }
13655 else
13656 {
13657 module_state *module = import_entity_module (index);
13658 binding_slot *slot = &(*entity_ary)[index];
13659 if (!slot->is_lazy ())
13660 dump () && dump ("Specialiation %M[%u] already loaded",
13661 module, index - module->entity_lwm);
13662 else if (!module->lazy_load (index - module->entity_lwm, slot))
13663 ok = false;
13664 }
13665 }
13666
13667 /* We own set, so delete it now. */
13668 delete pendings;
13669
13670 return ok;
13671 }
13672
13673 /* Initialize location spans. */
13674
13675 void
13676 loc_spans::init (const line_maps *lmaps, const line_map_ordinary *map)
13677 {
13678 gcc_checking_assert (!init_p ());
13679 spans = new vec<span> ();
13680 spans->reserve (20);
13681
13682 span interval;
13683 interval.ordinary.first = 0;
13684 interval.macro.second = MAX_LOCATION_T + 1;
13685 interval.ordinary_delta = interval.macro_delta = 0;
13686
13687 /* A span for reserved fixed locs. */
13688 interval.ordinary.second
13689 = MAP_START_LOCATION (LINEMAPS_ORDINARY_MAP_AT (line_table, 0));
13690 interval.macro.first = interval.macro.second;
13691 dump (dumper::LOCATION)
13692 && dump ("Fixed span %u ordinary:[%u,%u) macro:[%u,%u)", spans->length (),
13693 interval.ordinary.first, interval.ordinary.second,
13694 interval.macro.first, interval.macro.second);
13695 spans->quick_push (interval);
13696
13697 /* A span for command line & forced headers. */
13698 interval.ordinary.first = interval.ordinary.second;
13699 interval.macro.second = interval.macro.first;
13700 if (map)
13701 {
13702 interval.ordinary.second = map->start_location;
13703 interval.macro.first = LINEMAPS_MACRO_LOWEST_LOCATION (lmaps);
13704 }
13705 dump (dumper::LOCATION)
13706 && dump ("Pre span %u ordinary:[%u,%u) macro:[%u,%u)", spans->length (),
13707 interval.ordinary.first, interval.ordinary.second,
13708 interval.macro.first, interval.macro.second);
13709 spans->quick_push (interval);
13710
13711 /* Start an interval for the main file. */
13712 interval.ordinary.first = interval.ordinary.second;
13713 interval.macro.second = interval.macro.first;
13714 dump (dumper::LOCATION)
13715 && dump ("Main span %u ordinary:[%u,*) macro:[*,%u)", spans->length (),
13716 interval.ordinary.first, interval.macro.second);
13717 spans->quick_push (interval);
13718 }
13719
13720 /* Reopen the span, if we want the about-to-be-inserted set of maps to
13721 be propagated in our own location table. I.e. we are the primary
13722 interface and we're importing a partition. */
13723
13724 bool
13725 loc_spans::maybe_propagate (module_state *import,
13726 location_t loc = UNKNOWN_LOCATION)
13727 {
13728 bool opened = (module_interface_p () && !module_partition_p ()
13729 && import->is_partition ());
13730 if (opened)
13731 open (loc);
13732 return opened;
13733 }
13734
13735 /* Open a new linemap interval. The just-created ordinary map is the
13736 first map of the interval. */
13737
13738 void
13739 loc_spans::open (location_t hwm = UNKNOWN_LOCATION)
13740 {
13741 if (hwm == UNKNOWN_LOCATION)
13742 hwm = MAP_START_LOCATION (LINEMAPS_LAST_ORDINARY_MAP (line_table));
13743
13744 span interval;
13745 interval.ordinary.first = interval.ordinary.second = hwm;
13746 interval.macro.first = interval.macro.second
13747 = LINEMAPS_MACRO_LOWEST_LOCATION (line_table);
13748 interval.ordinary_delta = interval.macro_delta = 0;
13749 dump (dumper::LOCATION)
13750 && dump ("Opening span %u ordinary:[%u,... macro:...,%u)",
13751 spans->length (), interval.ordinary.first,
13752 interval.macro.second);
13753 spans->safe_push (interval);
13754 }
13755
13756 /* Close out the current linemap interval. The last maps are within
13757 the interval. */
13758
13759 void
13760 loc_spans::close ()
13761 {
13762 span &interval = spans->last ();
13763
13764 interval.ordinary.second
13765 = ((line_table->highest_location + (1 << line_table->default_range_bits))
13766 & ~((1u << line_table->default_range_bits) - 1));
13767 interval.macro.first = LINEMAPS_MACRO_LOWEST_LOCATION (line_table);
13768 dump (dumper::LOCATION)
13769 && dump ("Closing span %u ordinary:[%u,%u) macro:[%u,%u)",
13770 spans->length () - 1,
13771 interval.ordinary.first,interval.ordinary.second,
13772 interval.macro.first, interval.macro.second);
13773 }
13774
13775 /* Given an ordinary location LOC, return the lmap_interval it resides
13776 in. NULL if it is not in an interval. */
13777
13778 const loc_spans::span *
13779 loc_spans::ordinary (location_t loc)
13780 {
13781 unsigned len = spans->length ();
13782 unsigned pos = 0;
13783 while (len)
13784 {
13785 unsigned half = len / 2;
13786 const span &probe = (*spans)[pos + half];
13787 if (loc < probe.ordinary.first)
13788 len = half;
13789 else if (loc < probe.ordinary.second)
13790 return &probe;
13791 else
13792 {
13793 pos += half + 1;
13794 len = len - (half + 1);
13795 }
13796 }
13797 return NULL;
13798 }
13799
13800 /* Likewise, given a macro location LOC, return the lmap interval it
13801 resides in. */
13802
13803 const loc_spans::span *
13804 loc_spans::macro (location_t loc)
13805 {
13806 unsigned len = spans->length ();
13807 unsigned pos = 0;
13808 while (len)
13809 {
13810 unsigned half = len / 2;
13811 const span &probe = (*spans)[pos + half];
13812 if (loc >= probe.macro.second)
13813 len = half;
13814 else if (loc >= probe.macro.first)
13815 return &probe;
13816 else
13817 {
13818 pos += half + 1;
13819 len = len - (half + 1);
13820 }
13821 }
13822 return NULL;
13823 }
13824
13825 /* Return the ordinary location closest to FROM. */
13826
13827 static location_t
13828 ordinary_loc_of (line_maps *lmaps, location_t from)
13829 {
13830 while (!IS_ORDINARY_LOC (from))
13831 {
13832 if (IS_ADHOC_LOC (from))
13833 from = get_location_from_adhoc_loc (lmaps, from);
13834 if (IS_MACRO_LOC (from))
13835 {
13836 /* Find the ordinary location nearest FROM. */
13837 const line_map *map = linemap_lookup (lmaps, from);
13838 const line_map_macro *mac_map = linemap_check_macro (map);
13839 from = MACRO_MAP_EXPANSION_POINT_LOCATION (mac_map);
13840 }
13841 }
13842 return from;
13843 }
13844
13845 static module_state **
13846 get_module_slot (tree name, module_state *parent, bool partition, bool insert)
13847 {
13848 module_state_hash::compare_type ct (name, uintptr_t (parent) | partition);
13849 hashval_t hv = module_state_hash::hash (ct);
13850
13851 return modules_hash->find_slot_with_hash (ct, hv, insert ? INSERT : NO_INSERT);
13852 }
13853
13854 static module_state *
13855 get_primary (module_state *parent)
13856 {
13857 while (parent->is_partition ())
13858 parent = parent->parent;
13859
13860 if (!parent->name)
13861 // Implementation unit has null name
13862 parent = parent->parent;
13863
13864 return parent;
13865 }
13866
13867 /* Find or create module NAME & PARENT in the hash table. */
13868
13869 module_state *
13870 get_module (tree name, module_state *parent, bool partition)
13871 {
13872 if (partition)
13873 {
13874 if (!parent)
13875 parent = get_primary ((*modules)[0]);
13876
13877 if (!parent->is_partition () && !parent->flatname)
13878 parent->set_flatname ();
13879 }
13880
13881 module_state **slot = get_module_slot (name, parent, partition, true);
13882 module_state *state = *slot;
13883 if (!state)
13884 {
13885 state = (new (ggc_alloc<module_state> ())
13886 module_state (name, parent, partition));
13887 *slot = state;
13888 }
13889 return state;
13890 }
13891
13892 /* Process string name PTR into a module_state. */
13893
13894 static module_state *
13895 get_module (const char *ptr)
13896 {
13897 if (ptr[0] == '.' ? IS_DIR_SEPARATOR (ptr[1]) : IS_ABSOLUTE_PATH (ptr))
13898 /* A header name. */
13899 return get_module (build_string (strlen (ptr), ptr));
13900
13901 bool partition = false;
13902 module_state *mod = NULL;
13903
13904 for (const char *probe = ptr;; probe++)
13905 if (!*probe || *probe == '.' || *probe == ':')
13906 {
13907 if (probe == ptr)
13908 return NULL;
13909
13910 mod = get_module (get_identifier_with_length (ptr, probe - ptr),
13911 mod, partition);
13912 ptr = probe;
13913 if (*ptr == ':')
13914 {
13915 if (partition)
13916 return NULL;
13917 partition = true;
13918 }
13919
13920 if (!*ptr++)
13921 break;
13922 }
13923 else if (!(ISALPHA (*probe) || *probe == '_'
13924 || (probe != ptr && ISDIGIT (*probe))))
13925 return NULL;
13926
13927 return mod;
13928 }
13929
13930 /* Create a new mapper connecting to OPTION. */
13931
13932 module_client *
13933 make_mapper (location_t loc)
13934 {
13935 timevar_start (TV_MODULE_MAPPER);
13936 const char *option = module_mapper_name;
13937 if (!option)
13938 option = getenv ("CXX_MODULE_MAPPER");
13939
13940 mapper = module_client::open_module_client
13941 (loc, option, &set_cmi_repo,
13942 (save_decoded_options[0].opt_index == OPT_SPECIAL_program_name)
13943 && save_decoded_options[0].arg != progname
13944 ? save_decoded_options[0].arg : nullptr);
13945
13946 timevar_stop (TV_MODULE_MAPPER);
13947
13948 return mapper;
13949 }
13950
13951 /* If THIS is the current purview, issue an import error and return false. */
13952
13953 bool
13954 module_state::check_not_purview (location_t from)
13955 {
13956 module_state *imp = (*modules)[0];
13957 if (imp && !imp->name)
13958 imp = imp->parent;
13959 if (imp == this)
13960 {
13961 /* Cannot import the current module. */
13962 error_at (from, "cannot import module in its own purview");
13963 inform (loc, "module %qs declared here", get_flatname ());
13964 return false;
13965 }
13966 return true;
13967 }
13968
13969 /* Module name substitutions. */
13970 static vec<module_state *,va_heap> substs;
13971
13972 void
13973 module_state::mangle (bool include_partition)
13974 {
13975 if (subst)
13976 mangle_module_substitution (subst - 1);
13977 else
13978 {
13979 if (parent)
13980 parent->mangle (include_partition);
13981 if (include_partition || !is_partition ())
13982 {
13983 char p = 0;
13984 // Partitions are significant for global initializer functions
13985 if (is_partition () && !parent->is_partition ())
13986 p = 'P';
13987 substs.safe_push (this);
13988 subst = substs.length ();
13989 mangle_identifier (p, name);
13990 }
13991 }
13992 }
13993
13994 void
13995 mangle_module (int mod, bool include_partition)
13996 {
13997 module_state *imp = (*modules)[mod];
13998
13999 if (!imp->name)
14000 /* Set when importing the primary module interface. */
14001 imp = imp->parent;
14002
14003 imp->mangle (include_partition);
14004 }
14005
14006 /* Clean up substitutions. */
14007 void
14008 mangle_module_fini ()
14009 {
14010 while (substs.length ())
14011 substs.pop ()->subst = 0;
14012 }
14013
14014 /* Announce WHAT about the module. */
14015
14016 void
14017 module_state::announce (const char *what) const
14018 {
14019 if (noisy_p ())
14020 {
14021 fprintf (stderr, " %s:%s", what, get_flatname ());
14022 fflush (stderr);
14023 }
14024 }
14025
14026 /* A human-readable README section. The contents of this section to
14027 not contribute to the CRC, so the contents can change per
14028 compilation. That allows us to embed CWD, hostname, build time and
14029 what not. It is a STRTAB that may be extracted with:
14030 readelf -pgnu.c++.README $(module).gcm */
14031
14032 void
14033 module_state::write_readme (elf_out *to, cpp_reader *reader,
14034 const char *dialect, unsigned extensions)
14035 {
14036 bytes_out readme (to);
14037
14038 readme.begin (false);
14039
14040 readme.printf ("GNU C++ %smodule%s%s",
14041 is_header () ? "header " : is_partition () ? "" : "primary ",
14042 is_header () ? ""
14043 : is_interface () ? " interface" : " implementation",
14044 is_partition () ? " partition" : "");
14045
14046 /* Compiler's version. */
14047 readme.printf ("compiler: %s", version_string);
14048
14049 /* Module format version. */
14050 verstr_t string;
14051 version2string (MODULE_VERSION, string);
14052 readme.printf ("version: %s", string);
14053
14054 /* Module information. */
14055 readme.printf ("module: %s", get_flatname ());
14056 readme.printf ("source: %s", main_input_filename);
14057 readme.printf ("dialect: %s", dialect);
14058 if (extensions)
14059 readme.printf ("extensions: %s",
14060 extensions & SE_OPENMP ? "-fopenmp" : "");
14061
14062 /* The following fields could be expected to change between
14063 otherwise identical compilations. Consider a distributed build
14064 system. We should have a way of overriding that. */
14065 if (char *cwd = getcwd (NULL, 0))
14066 {
14067 readme.printf ("cwd: %s", cwd);
14068 free (cwd);
14069 }
14070 readme.printf ("repository: %s", cmi_repo ? cmi_repo : ".");
14071 #if NETWORKING
14072 {
14073 char hostname[64];
14074 if (!gethostname (hostname, sizeof (hostname)))
14075 readme.printf ("host: %s", hostname);
14076 }
14077 #endif
14078 {
14079 /* This of course will change! */
14080 time_t stampy;
14081 auto kind = cpp_get_date (reader, &stampy);
14082 if (kind != CPP_time_kind::UNKNOWN)
14083 {
14084 struct tm *time;
14085
14086 time = gmtime (&stampy);
14087 readme.print_time ("build", time, "UTC");
14088
14089 if (kind == CPP_time_kind::DYNAMIC)
14090 {
14091 time = localtime (&stampy);
14092 readme.print_time ("local", time,
14093 #if defined (__USE_MISC) || defined (__USE_BSD) /* Is there a better way? */
14094 time->tm_zone
14095 #else
14096 ""
14097 #endif
14098 );
14099 }
14100 }
14101 }
14102
14103 /* Its direct imports. */
14104 for (unsigned ix = 1; ix < modules->length (); ix++)
14105 {
14106 module_state *state = (*modules)[ix];
14107
14108 if (state->is_direct ())
14109 readme.printf ("%s: %s %s", state->exported_p ? "export" : "import",
14110 state->get_flatname (), state->filename);
14111 }
14112
14113 readme.end (to, to->name (MOD_SNAME_PFX ".README"), NULL);
14114 }
14115
14116 /* Sort environment var names in reverse order. */
14117
14118 static int
14119 env_var_cmp (const void *a_, const void *b_)
14120 {
14121 const unsigned char *a = *(const unsigned char *const *)a_;
14122 const unsigned char *b = *(const unsigned char *const *)b_;
14123
14124 for (unsigned ix = 0; ; ix++)
14125 {
14126 bool a_end = !a[ix] || a[ix] == '=';
14127 if (a[ix] == b[ix])
14128 {
14129 if (a_end)
14130 break;
14131 }
14132 else
14133 {
14134 bool b_end = !b[ix] || b[ix] == '=';
14135
14136 if (!a_end && !b_end)
14137 return a[ix] < b[ix] ? +1 : -1;
14138 if (a_end && b_end)
14139 break;
14140 return a_end ? +1 : -1;
14141 }
14142 }
14143
14144 return 0;
14145 }
14146
14147 /* Write the environment. It is a STRTAB that may be extracted with:
14148 readelf -pgnu.c++.ENV $(module).gcm */
14149
14150 void
14151 module_state::write_env (elf_out *to)
14152 {
14153 vec<const char *> vars;
14154 vars.create (20);
14155
14156 extern char **environ;
14157 while (const char *var = environ[vars.length ()])
14158 vars.safe_push (var);
14159 vars.qsort (env_var_cmp);
14160
14161 bytes_out env (to);
14162 env.begin (false);
14163 while (vars.length ())
14164 env.printf ("%s", vars.pop ());
14165 env.end (to, to->name (MOD_SNAME_PFX ".ENV"), NULL);
14166
14167 vars.release ();
14168 }
14169
14170 /* Write the direct or indirect imports.
14171 u:N
14172 {
14173 u:index
14174 s:name
14175 u32:crc
14176 s:filename (direct)
14177 u:exported (direct)
14178 } imports[N]
14179 */
14180
14181 void
14182 module_state::write_imports (bytes_out &sec, bool direct)
14183 {
14184 unsigned count = 0;
14185
14186 for (unsigned ix = 1; ix < modules->length (); ix++)
14187 {
14188 module_state *imp = (*modules)[ix];
14189
14190 if (imp->remap && imp->is_direct () == direct)
14191 count++;
14192 }
14193
14194 gcc_assert (!direct || count);
14195
14196 sec.u (count);
14197 for (unsigned ix = 1; ix < modules->length (); ix++)
14198 {
14199 module_state *imp = (*modules)[ix];
14200
14201 if (imp->remap && imp->is_direct () == direct)
14202 {
14203 dump () && dump ("Writing %simport:%u->%u %M (crc=%x)",
14204 !direct ? "indirect "
14205 : imp->exported_p ? "exported " : "",
14206 ix, imp->remap, imp, imp->crc);
14207 sec.u (imp->remap);
14208 sec.str (imp->get_flatname ());
14209 sec.u32 (imp->crc);
14210 if (direct)
14211 {
14212 write_location (sec, imp->imported_from ());
14213 sec.str (imp->filename);
14214 int exportedness = 0;
14215 if (imp->exported_p)
14216 exportedness = +1;
14217 else if (!imp->is_purview_direct ())
14218 exportedness = -1;
14219 sec.i (exportedness);
14220 }
14221 }
14222 }
14223 }
14224
14225 /* READER, LMAPS != NULL == direct imports,
14226 == NUL == indirect imports. */
14227
14228 unsigned
14229 module_state::read_imports (bytes_in &sec, cpp_reader *reader, line_maps *lmaps)
14230 {
14231 unsigned count = sec.u ();
14232 unsigned loaded = 0;
14233
14234 while (count--)
14235 {
14236 unsigned ix = sec.u ();
14237 if (ix >= slurp->remap->length () || !ix || (*slurp->remap)[ix])
14238 {
14239 sec.set_overrun ();
14240 break;
14241 }
14242
14243 const char *name = sec.str (NULL);
14244 module_state *imp = get_module (name);
14245 unsigned crc = sec.u32 ();
14246 int exportedness = 0;
14247
14248 /* If the import is a partition, it must be the same primary
14249 module as this TU. */
14250 if (imp && imp->is_partition () &&
14251 (!named_module_p ()
14252 || (get_primary ((*modules)[0]) != get_primary (imp))))
14253 imp = NULL;
14254
14255 if (!imp)
14256 sec.set_overrun ();
14257 if (sec.get_overrun ())
14258 break;
14259
14260 if (lmaps)
14261 {
14262 /* A direct import, maybe load it. */
14263 location_t floc = read_location (sec);
14264 const char *fname = sec.str (NULL);
14265 exportedness = sec.i ();
14266
14267 if (sec.get_overrun ())
14268 break;
14269
14270 if (!imp->check_not_purview (loc))
14271 continue;
14272
14273 if (imp->loadedness == ML_NONE)
14274 {
14275 imp->loc = floc;
14276 imp->crc = crc;
14277 if (!imp->get_flatname ())
14278 imp->set_flatname ();
14279
14280 unsigned n = dump.push (imp);
14281
14282 if (!imp->filename && fname)
14283 imp->filename = xstrdup (fname);
14284
14285 if (imp->is_partition ())
14286 dump () && dump ("Importing elided partition %M", imp);
14287
14288 if (!imp->do_import (reader, false))
14289 imp = NULL;
14290 dump.pop (n);
14291 if (!imp)
14292 continue;
14293 }
14294
14295 if (is_partition ())
14296 {
14297 if (!imp->is_direct ())
14298 imp->directness = MD_PARTITION_DIRECT;
14299 if (exportedness > 0)
14300 imp->exported_p = true;
14301 }
14302 }
14303 else
14304 {
14305 /* An indirect import, find it, it should already be here. */
14306 if (imp->loadedness == ML_NONE)
14307 {
14308 error_at (loc, "indirect import %qs is not already loaded", name);
14309 continue;
14310 }
14311 }
14312
14313 if (imp->crc != crc)
14314 error_at (loc, "import %qs has CRC mismatch", imp->get_flatname ());
14315
14316 (*slurp->remap)[ix] = (imp->mod << 1) | (lmaps != NULL);
14317
14318 if (lmaps && exportedness >= 0)
14319 set_import (imp, bool (exportedness));
14320 dump () && dump ("Found %simport:%u %M->%u", !lmaps ? "indirect "
14321 : exportedness > 0 ? "exported "
14322 : exportedness < 0 ? "gmf" : "", ix, imp,
14323 imp->mod);
14324 loaded++;
14325 }
14326
14327 return loaded;
14328 }
14329
14330 /* Write the import table to MOD_SNAME_PFX.imp. */
14331
14332 void
14333 module_state::write_imports (elf_out *to, unsigned *crc_ptr)
14334 {
14335 dump () && dump ("Writing imports");
14336 dump.indent ();
14337
14338 bytes_out sec (to);
14339 sec.begin ();
14340
14341 write_imports (sec, true);
14342 write_imports (sec, false);
14343
14344 sec.end (to, to->name (MOD_SNAME_PFX ".imp"), crc_ptr);
14345 dump.outdent ();
14346 }
14347
14348 bool
14349 module_state::read_imports (cpp_reader *reader, line_maps *lmaps)
14350 {
14351 bytes_in sec;
14352
14353 if (!sec.begin (loc, from (), MOD_SNAME_PFX ".imp"))
14354 return false;
14355
14356 dump () && dump ("Reading %u imports", slurp->remap->length () - 1);
14357 dump.indent ();
14358
14359 /* Read the imports. */
14360 unsigned direct = read_imports (sec, reader, lmaps);
14361 unsigned indirect = read_imports (sec, NULL, NULL);
14362 if (direct + indirect + 1 != slurp->remap->length ())
14363 from ()->set_error (elf::E_BAD_IMPORT);
14364
14365 dump.outdent ();
14366 if (!sec.end (from ()))
14367 return false;
14368 return true;
14369 }
14370
14371 /* We're the primary module interface, but have partitions. Document
14372 them so that non-partition module implementation units know which
14373 have already been loaded. */
14374
14375 void
14376 module_state::write_partitions (elf_out *to, unsigned count, unsigned *crc_ptr)
14377 {
14378 dump () && dump ("Writing %u elided partitions", count);
14379 dump.indent ();
14380
14381 bytes_out sec (to);
14382 sec.begin ();
14383
14384 for (unsigned ix = 1; ix != modules->length (); ix++)
14385 {
14386 module_state *imp = (*modules)[ix];
14387 if (imp->is_partition ())
14388 {
14389 dump () && dump ("Writing elided partition %M (crc=%x)",
14390 imp, imp->crc);
14391 sec.str (imp->get_flatname ());
14392 sec.u32 (imp->crc);
14393 write_location (sec, imp->is_direct ()
14394 ? imp->imported_from () : UNKNOWN_LOCATION);
14395 sec.str (imp->filename);
14396 }
14397 }
14398
14399 sec.end (to, to->name (MOD_SNAME_PFX ".prt"), crc_ptr);
14400 dump.outdent ();
14401 }
14402
14403 bool
14404 module_state::read_partitions (unsigned count)
14405 {
14406 bytes_in sec;
14407 if (!sec.begin (loc, from (), MOD_SNAME_PFX ".prt"))
14408 return false;
14409
14410 dump () && dump ("Reading %u elided partitions", count);
14411 dump.indent ();
14412
14413 while (count--)
14414 {
14415 const char *name = sec.str (NULL);
14416 unsigned crc = sec.u32 ();
14417 location_t floc = read_location (sec);
14418 const char *fname = sec.str (NULL);
14419
14420 if (sec.get_overrun ())
14421 break;
14422
14423 dump () && dump ("Reading elided partition %s (crc=%x)", name, crc);
14424
14425 module_state *imp = get_module (name);
14426 if (!imp || !imp->is_partition () || imp->is_rooted ()
14427 || get_primary (imp) != this)
14428 {
14429 sec.set_overrun ();
14430 break;
14431 }
14432
14433 /* Attach the partition without loading it. We'll have to load
14434 for real if it's indirectly imported. */
14435 imp->loc = floc;
14436 imp->crc = crc;
14437 if (!imp->filename && fname[0])
14438 imp->filename = xstrdup (fname);
14439 }
14440
14441 dump.outdent ();
14442 if (!sec.end (from ()))
14443 return false;
14444 return true;
14445 }
14446
14447 /* Counter indices. */
14448 enum module_state_counts
14449 {
14450 MSC_sec_lwm,
14451 MSC_sec_hwm,
14452 MSC_pendings,
14453 MSC_entities,
14454 MSC_namespaces,
14455 MSC_bindings,
14456 MSC_macros,
14457 MSC_inits,
14458 MSC_HWM
14459 };
14460
14461 /* Data for config reading and writing. */
14462 struct module_state_config {
14463 const char *dialect_str;
14464 unsigned num_imports;
14465 unsigned num_partitions;
14466 unsigned ordinary_locs;
14467 unsigned macro_locs;
14468 unsigned ordinary_loc_align;
14469
14470 public:
14471 module_state_config ()
14472 :dialect_str (get_dialect ()),
14473 num_imports (0), num_partitions (0),
14474 ordinary_locs (0), macro_locs (0), ordinary_loc_align (0)
14475 {
14476 }
14477
14478 static void release ()
14479 {
14480 XDELETEVEC (dialect);
14481 dialect = NULL;
14482 }
14483
14484 private:
14485 static const char *get_dialect ();
14486 static char *dialect;
14487 };
14488
14489 char *module_state_config::dialect;
14490
14491 /* Generate a string of the significant compilation options.
14492 Generally assume the user knows what they're doing, in the same way
14493 that object files can be mixed. */
14494
14495 const char *
14496 module_state_config::get_dialect ()
14497 {
14498 if (!dialect)
14499 dialect = concat (get_cxx_dialect_name (cxx_dialect),
14500 /* C++ implies these, only show if disabled. */
14501 flag_exceptions ? "" : "/no-exceptions",
14502 flag_rtti ? "" : "/no-rtti",
14503 flag_new_inheriting_ctors ? "" : "/old-inheriting-ctors",
14504 /* C++ 20 implies concepts. */
14505 cxx_dialect < cxx20 && flag_concepts ? "/concepts" : "",
14506 flag_coroutines ? "/coroutines" : "",
14507 flag_module_implicit_inline ? "/implicit-inline" : "",
14508 NULL);
14509
14510 return dialect;
14511 }
14512
14513 /* Contents of a cluster. */
14514 enum cluster_tag {
14515 ct_decl, /* A decl. */
14516 ct_defn, /* A definition. */
14517 ct_bind, /* A binding. */
14518 ct_hwm
14519 };
14520
14521 /* Binding modifiers. */
14522 enum ct_bind_flags
14523 {
14524 cbf_export = 0x1, /* An exported decl. */
14525 cbf_hidden = 0x2, /* A hidden (friend) decl. */
14526 cbf_using = 0x4, /* A using decl. */
14527 cbf_wrapped = 0x8, /* ... that is wrapped. */
14528 };
14529
14530 /* Write the cluster of depsets in SCC[0-SIZE). */
14531
14532 unsigned
14533 module_state::write_cluster (elf_out *to, depset *scc[], unsigned size,
14534 depset::hash &table, unsigned *counts,
14535 unsigned *crc_ptr)
14536 {
14537 dump () && dump ("Writing section:%u %u depsets", table.section, size);
14538 dump.indent ();
14539
14540 trees_out sec (to, this, table, table.section);
14541 sec.begin ();
14542
14543 /* Determine entity numbers, mark for writing. */
14544 dump (dumper::CLUSTER) && dump ("Cluster members:") && (dump.indent (), true);
14545 for (unsigned ix = 0; ix != size; ix++)
14546 {
14547 depset *b = scc[ix];
14548
14549 switch (b->get_entity_kind ())
14550 {
14551 default:
14552 gcc_unreachable ();
14553
14554 case depset::EK_BINDING:
14555 dump (dumper::CLUSTER)
14556 && dump ("[%u]=%s %P", ix, b->entity_kind_name (),
14557 b->get_entity (), b->get_name ());
14558 for (unsigned jx = b->deps.length (); jx--;)
14559 {
14560 depset *dep = b->deps[jx];
14561 if (jx)
14562 gcc_checking_assert (dep->get_entity_kind () == depset::EK_USING
14563 || TREE_VISITED (dep->get_entity ()));
14564 else
14565 gcc_checking_assert (dep->get_entity_kind ()
14566 == depset::EK_NAMESPACE
14567 && dep->get_entity () == b->get_entity ());
14568 }
14569 break;
14570
14571 case depset::EK_DECL:
14572 if (b->is_member ())
14573 {
14574 case depset::EK_SPECIALIZATION: /* Yowzer! */
14575 case depset::EK_PARTIAL: /* Hey, let's do it again! */
14576 counts[MSC_pendings]++;
14577 }
14578 b->cluster = counts[MSC_entities]++;
14579 sec.mark_declaration (b->get_entity (), b->has_defn ());
14580 /* FALLTHROUGH */
14581
14582 case depset::EK_USING:
14583 gcc_checking_assert (!b->is_import ()
14584 && !b->is_unreached ());
14585 dump (dumper::CLUSTER)
14586 && dump ("[%u]=%s %s %N", ix, b->entity_kind_name (),
14587 b->has_defn () ? "definition" : "declaration",
14588 b->get_entity ());
14589 break;
14590 }
14591 }
14592 dump (dumper::CLUSTER) && (dump.outdent (), true);
14593
14594 /* Ensure every imported decl is referenced before we start
14595 streaming. This ensures that we never encounter the
14596 situation where this cluster instantiates some implicit
14597 member that importing some other decl causes to be
14598 instantiated. */
14599 sec.set_importing (+1);
14600 for (unsigned ix = 0; ix != size; ix++)
14601 {
14602 depset *b = scc[ix];
14603 for (unsigned jx = (b->get_entity_kind () == depset::EK_BINDING
14604 || b->is_special ()) ? 1 : 0;
14605 jx != b->deps.length (); jx++)
14606 {
14607 depset *dep = b->deps[jx];
14608
14609 if (!dep->is_binding ()
14610 && dep->is_import () && !TREE_VISITED (dep->get_entity ()))
14611 {
14612 tree import = dep->get_entity ();
14613
14614 sec.tree_node (import);
14615 dump (dumper::CLUSTER) && dump ("Seeded import %N", import);
14616 }
14617 }
14618 }
14619 sec.tree_node (NULL_TREE);
14620 /* We're done importing now. */
14621 sec.set_importing (-1);
14622
14623 /* Write non-definitions. */
14624 for (unsigned ix = 0; ix != size; ix++)
14625 {
14626 depset *b = scc[ix];
14627 tree decl = b->get_entity ();
14628 switch (b->get_entity_kind ())
14629 {
14630 default:
14631 gcc_unreachable ();
14632 break;
14633
14634 case depset::EK_BINDING:
14635 {
14636 gcc_assert (TREE_CODE (decl) == NAMESPACE_DECL);
14637 dump () && dump ("Depset:%u binding %C:%P", ix, TREE_CODE (decl),
14638 decl, b->get_name ());
14639 sec.u (ct_bind);
14640 sec.tree_node (decl);
14641 sec.tree_node (b->get_name ());
14642
14643 /* Write in reverse order, so reading will see the exports
14644 first, thus building the overload chain will be
14645 optimized. */
14646 for (unsigned jx = b->deps.length (); --jx;)
14647 {
14648 depset *dep = b->deps[jx];
14649 tree bound = dep->get_entity ();
14650 unsigned flags = 0;
14651 if (dep->get_entity_kind () == depset::EK_USING)
14652 {
14653 tree ovl = bound;
14654 bound = OVL_FUNCTION (bound);
14655 if (!(TREE_CODE (bound) == CONST_DECL
14656 && UNSCOPED_ENUM_P (TREE_TYPE (bound))
14657 && decl == TYPE_NAME (TREE_TYPE (bound))))
14658 {
14659 /* An unscope enumerator in its enumeration's
14660 scope is not a using. */
14661 flags |= cbf_using;
14662 if (OVL_USING_P (ovl))
14663 flags |= cbf_wrapped;
14664 }
14665 if (OVL_EXPORT_P (ovl))
14666 flags |= cbf_export;
14667 }
14668 else
14669 {
14670 /* An implicit typedef must be at one. */
14671 gcc_assert (!DECL_IMPLICIT_TYPEDEF_P (bound) || jx == 1);
14672 if (dep->is_hidden ())
14673 flags |= cbf_hidden;
14674 else if (DECL_MODULE_EXPORT_P (STRIP_TEMPLATE (bound)))
14675 flags |= cbf_export;
14676 }
14677
14678 gcc_checking_assert (DECL_P (bound));
14679
14680 sec.i (flags);
14681 sec.tree_node (bound);
14682 }
14683
14684 /* Terminate the list. */
14685 sec.i (-1);
14686 }
14687 break;
14688
14689 case depset::EK_USING:
14690 dump () && dump ("Depset:%u %s %C:%N", ix, b->entity_kind_name (),
14691 TREE_CODE (decl), decl);
14692 break;
14693
14694 case depset::EK_SPECIALIZATION:
14695 case depset::EK_PARTIAL:
14696 case depset::EK_DECL:
14697 dump () && dump ("Depset:%u %s entity:%u %C:%N", ix,
14698 b->entity_kind_name (), b->cluster,
14699 TREE_CODE (decl), decl);
14700
14701 sec.u (ct_decl);
14702 sec.tree_node (decl);
14703
14704 dump () && dump ("Wrote declaration entity:%u %C:%N",
14705 b->cluster, TREE_CODE (decl), decl);
14706 break;
14707 }
14708 }
14709
14710 depset *namer = NULL;
14711
14712 /* Write out definitions */
14713 for (unsigned ix = 0; ix != size; ix++)
14714 {
14715 depset *b = scc[ix];
14716 tree decl = b->get_entity ();
14717 switch (b->get_entity_kind ())
14718 {
14719 default:
14720 break;
14721
14722 case depset::EK_SPECIALIZATION:
14723 case depset::EK_PARTIAL:
14724 case depset::EK_DECL:
14725 if (!namer)
14726 namer = b;
14727
14728 if (b->has_defn ())
14729 {
14730 sec.u (ct_defn);
14731 sec.tree_node (decl);
14732 dump () && dump ("Writing definition %N", decl);
14733 sec.write_definition (decl);
14734
14735 if (!namer->has_defn ())
14736 namer = b;
14737 }
14738 break;
14739 }
14740 }
14741
14742 /* We don't find the section by name. Use depset's decl's name for
14743 human friendliness. */
14744 unsigned name = 0;
14745 tree naming_decl = NULL_TREE;
14746 if (namer)
14747 {
14748 naming_decl = namer->get_entity ();
14749 if (namer->get_entity_kind () == depset::EK_USING)
14750 /* This unfortunately names the section from the target of the
14751 using decl. But the name is only a guide, so Do Not Care. */
14752 naming_decl = OVL_FUNCTION (naming_decl);
14753 if (DECL_IMPLICIT_TYPEDEF_P (naming_decl))
14754 /* Lose any anonymousness. */
14755 naming_decl = TYPE_NAME (TREE_TYPE (naming_decl));
14756 name = to->qualified_name (naming_decl, namer->has_defn ());
14757 }
14758
14759 unsigned bytes = sec.pos;
14760 unsigned snum = sec.end (to, name, crc_ptr);
14761
14762 for (unsigned ix = size; ix--;)
14763 gcc_checking_assert (scc[ix]->section == snum);
14764
14765 dump.outdent ();
14766 dump () && dump ("Wrote section:%u named-by:%N", table.section, naming_decl);
14767
14768 return bytes;
14769 }
14770
14771 /* Read a cluster from section SNUM. */
14772
14773 bool
14774 module_state::read_cluster (unsigned snum)
14775 {
14776 trees_in sec (this);
14777
14778 if (!sec.begin (loc, from (), snum))
14779 return false;
14780
14781 dump () && dump ("Reading section:%u", snum);
14782 dump.indent ();
14783
14784 /* We care about structural equality. */
14785 comparing_specializations++;
14786
14787 /* First seed the imports. */
14788 while (tree import = sec.tree_node ())
14789 dump (dumper::CLUSTER) && dump ("Seeded import %N", import);
14790
14791 while (!sec.get_overrun () && sec.more_p ())
14792 {
14793 unsigned ct = sec.u ();
14794 switch (ct)
14795 {
14796 default:
14797 sec.set_overrun ();
14798 break;
14799
14800 case ct_bind:
14801 /* A set of namespace bindings. */
14802 {
14803 tree ns = sec.tree_node ();
14804 tree name = sec.tree_node ();
14805 tree decls = NULL_TREE;
14806 tree visible = NULL_TREE;
14807 tree type = NULL_TREE;
14808 bool dedup = false;
14809
14810 /* We rely on the bindings being in the reverse order of
14811 the resulting overload set. */
14812 for (;;)
14813 {
14814 int flags = sec.i ();
14815 if (flags < 0)
14816 break;
14817
14818 if ((flags & cbf_hidden)
14819 && (flags & (cbf_using | cbf_export)))
14820 sec.set_overrun ();
14821
14822 tree decl = sec.tree_node ();
14823 if (sec.get_overrun ())
14824 break;
14825
14826 if (decls && TREE_CODE (decl) == TYPE_DECL)
14827 {
14828 /* Stat hack. */
14829 if (type || !DECL_IMPLICIT_TYPEDEF_P (decl))
14830 sec.set_overrun ();
14831 type = decl;
14832 }
14833 else
14834 {
14835 if (decls
14836 || (flags & (cbf_hidden | cbf_wrapped))
14837 || DECL_FUNCTION_TEMPLATE_P (decl))
14838 {
14839 decls = ovl_make (decl, decls);
14840 if (flags & cbf_using)
14841 {
14842 dedup = true;
14843 OVL_USING_P (decls) = true;
14844 if (flags & cbf_export)
14845 OVL_EXPORT_P (decls) = true;
14846 }
14847
14848 if (flags & cbf_hidden)
14849 OVL_HIDDEN_P (decls) = true;
14850 else if (dedup)
14851 OVL_DEDUP_P (decls) = true;
14852 }
14853 else
14854 decls = decl;
14855
14856 if (flags & cbf_export
14857 || (!(flags & cbf_hidden)
14858 && (is_module () || is_partition ())))
14859 visible = decls;
14860 }
14861 }
14862
14863 if (!decls)
14864 sec.set_overrun ();
14865
14866 if (sec.get_overrun ())
14867 break; /* Bail. */
14868
14869 dump () && dump ("Binding of %P", ns, name);
14870 if (!set_module_binding (ns, name, mod,
14871 is_header () ? -1
14872 : is_module () || is_partition () ? 1
14873 : 0,
14874 decls, type, visible))
14875 sec.set_overrun ();
14876
14877 if (type
14878 && CP_DECL_CONTEXT (type) == ns
14879 && !sec.is_duplicate (type))
14880 add_module_decl (ns, name, type);
14881
14882 for (ovl_iterator iter (decls); iter; ++iter)
14883 if (!iter.using_p ())
14884 {
14885 tree decl = *iter;
14886 if (CP_DECL_CONTEXT (decl) == ns
14887 && !sec.is_duplicate (decl))
14888 add_module_decl (ns, name, decl);
14889 }
14890 }
14891 break;
14892
14893 case ct_decl:
14894 /* A decl. */
14895 {
14896 tree decl = sec.tree_node ();
14897 dump () && dump ("Read declaration of %N", decl);
14898 }
14899 break;
14900
14901 case ct_defn:
14902 {
14903 tree decl = sec.tree_node ();
14904 dump () && dump ("Reading definition of %N", decl);
14905 sec.read_definition (decl);
14906 }
14907 break;
14908 }
14909 }
14910
14911 /* When lazy loading is in effect, we can be in the middle of
14912 parsing or instantiating a function. Save it away.
14913 push_function_context does too much work. */
14914 tree old_cfd = current_function_decl;
14915 struct function *old_cfun = cfun;
14916 while (tree decl = sec.post_process ())
14917 {
14918 bool abstract = false;
14919 if (TREE_CODE (decl) == TEMPLATE_DECL)
14920 {
14921 abstract = true;
14922 decl = DECL_TEMPLATE_RESULT (decl);
14923 }
14924
14925 current_function_decl = decl;
14926 allocate_struct_function (decl, abstract);
14927 cfun->language = ggc_cleared_alloc<language_function> ();
14928 cfun->language->base.x_stmt_tree.stmts_are_full_exprs_p = 1;
14929
14930 if (abstract)
14931 ;
14932 else if (DECL_ABSTRACT_P (decl))
14933 {
14934 bool cloned = maybe_clone_body (decl);
14935 if (!cloned)
14936 from ()->set_error ();
14937 }
14938 else
14939 {
14940 bool aggr = aggregate_value_p (DECL_RESULT (decl), decl);
14941 #ifdef PCC_STATIC_STRUCT_RETURN
14942 cfun->returns_pcc_struct = aggr;
14943 #endif
14944 cfun->returns_struct = aggr;
14945
14946 if (DECL_COMDAT (decl))
14947 // FIXME: Comdat grouping?
14948 comdat_linkage (decl);
14949 note_vague_linkage_fn (decl);
14950 cgraph_node::finalize_function (decl, true);
14951 }
14952
14953 }
14954 /* Look, function.c's interface to cfun does too much for us, we
14955 just need to restore the old value. I do not want to go
14956 redesigning that API right now. */
14957 #undef cfun
14958 cfun = old_cfun;
14959 current_function_decl = old_cfd;
14960 comparing_specializations--;
14961
14962 dump.outdent ();
14963 dump () && dump ("Read section:%u", snum);
14964
14965 loaded_clusters++;
14966
14967 if (!sec.end (from ()))
14968 return false;
14969
14970 return true;
14971 }
14972
14973 void
14974 module_state::write_namespace (bytes_out &sec, depset *dep)
14975 {
14976 unsigned ns_num = dep->cluster;
14977 unsigned ns_import = 0;
14978
14979 if (dep->is_import ())
14980 ns_import = dep->section;
14981 else if (dep->get_entity () != global_namespace)
14982 ns_num++;
14983
14984 sec.u (ns_import);
14985 sec.u (ns_num);
14986 }
14987
14988 tree
14989 module_state::read_namespace (bytes_in &sec)
14990 {
14991 unsigned ns_import = sec.u ();
14992 unsigned ns_num = sec.u ();
14993 tree ns = NULL_TREE;
14994
14995 if (ns_import || ns_num)
14996 {
14997 if (!ns_import)
14998 ns_num--;
14999
15000 if (unsigned origin = slurp->remap_module (ns_import))
15001 {
15002 module_state *from = (*modules)[origin];
15003 if (ns_num < from->entity_num)
15004 {
15005 binding_slot &slot = (*entity_ary)[from->entity_lwm + ns_num];
15006
15007 if (!slot.is_lazy ())
15008 ns = slot;
15009 }
15010 }
15011 else
15012 sec.set_overrun ();
15013 }
15014 else
15015 ns = global_namespace;
15016
15017 return ns;
15018 }
15019
15020 /* SPACES is a sorted vector of namespaces. Write out the namespaces
15021 to MOD_SNAME_PFX.nms section. */
15022
15023 void
15024 module_state::write_namespaces (elf_out *to, vec<depset *> spaces,
15025 unsigned num, unsigned *crc_p)
15026 {
15027 dump () && dump ("Writing namespaces");
15028 dump.indent ();
15029
15030 bytes_out sec (to);
15031 sec.begin ();
15032
15033 for (unsigned ix = 0; ix != num; ix++)
15034 {
15035 depset *b = spaces[ix];
15036 tree ns = b->get_entity ();
15037
15038 gcc_checking_assert (TREE_CODE (ns) == NAMESPACE_DECL);
15039
15040 bool export_p = DECL_MODULE_EXPORT_P (ns);
15041 bool inline_p = DECL_NAMESPACE_INLINE_P (ns);
15042 bool public_p = TREE_PUBLIC (ns);
15043
15044 /* We should only be naming public namespaces, or our own
15045 private ones. Internal linkage ones never get to be written
15046 out -- because that means something erroneously referred to a
15047 member. However, Davis Herring's paper probably changes that
15048 by permitting them to be written out, but then an error if on
15049 touches them. (Certain cases cannot be detected until that
15050 point.) */
15051 gcc_checking_assert (public_p || !DECL_MODULE_IMPORT_P (ns));
15052 unsigned flags = 0;
15053 if (export_p)
15054 flags |= 1;
15055 if (inline_p)
15056 flags |= 2;
15057 if (public_p)
15058 flags |= 4;
15059 dump () && dump ("Writing namespace:%u %N%s%s%s",
15060 b->cluster, ns, export_p ? ", export" : "",
15061 public_p ? ", public" : "",
15062 inline_p ? ", inline" : "");
15063 sec.u (b->cluster);
15064 sec.u (to->name (DECL_NAME (ns)));
15065 write_namespace (sec, b->deps[0]);
15066
15067 /* Don't use bools, because this can be near the end of the
15068 section, and it won't save anything anyway. */
15069 sec.u (flags);
15070 write_location (sec, DECL_SOURCE_LOCATION (ns));
15071 }
15072
15073 sec.end (to, to->name (MOD_SNAME_PFX ".nms"), crc_p);
15074 dump.outdent ();
15075 }
15076
15077 /* Read the namespace hierarchy from MOD_SNAME_PFX.namespace. Fill in
15078 SPACES from that data. */
15079
15080 bool
15081 module_state::read_namespaces (unsigned num)
15082 {
15083 bytes_in sec;
15084
15085 if (!sec.begin (loc, from (), MOD_SNAME_PFX ".nms"))
15086 return false;
15087
15088 dump () && dump ("Reading namespaces");
15089 dump.indent ();
15090
15091 for (unsigned ix = 0; ix != num; ix++)
15092 {
15093 unsigned entity_index = sec.u ();
15094 unsigned name = sec.u ();
15095
15096 tree parent = read_namespace (sec);
15097
15098 /* See comment in write_namespace about why not bits. */
15099 unsigned flags = sec.u ();
15100 location_t src_loc = read_location (sec);
15101
15102 if (entity_index >= entity_num || !parent)
15103 sec.set_overrun ();
15104 if (sec.get_overrun ())
15105 break;
15106
15107 tree id = name ? get_identifier (from ()->name (name)) : NULL_TREE;
15108 bool public_p = flags & 4;
15109 bool inline_p = flags & 2;
15110 bool export_p = flags & 1;
15111
15112 dump () && dump ("Read namespace:%u %P%s%s%s",
15113 entity_index, parent, id, export_p ? ", export" : "",
15114 public_p ? ", public" : "",
15115 inline_p ? ", inline" : "");
15116 bool visible_p = (export_p
15117 || (public_p && (is_partition () || is_module ())));
15118 tree inner = add_imported_namespace (parent, id, mod,
15119 src_loc, visible_p, inline_p);
15120 if (export_p && is_partition ())
15121 DECL_MODULE_EXPORT_P (inner) = true;
15122
15123 /* Install the namespace. */
15124 (*entity_ary)[entity_lwm + entity_index] = inner;
15125 if (DECL_MODULE_IMPORT_P (inner))
15126 {
15127 bool existed;
15128 unsigned *slot = &entity_map->get_or_insert
15129 (DECL_UID (inner), &existed);
15130 if (existed)
15131 /* If it existed, it should match. */
15132 gcc_checking_assert (inner == (*entity_ary)[*slot]);
15133 else
15134 *slot = entity_lwm + entity_index;
15135 }
15136 }
15137 dump.outdent ();
15138 if (!sec.end (from ()))
15139 return false;
15140 return true;
15141 }
15142
15143 /* Write the binding TABLE to MOD_SNAME_PFX.bnd */
15144
15145 unsigned
15146 module_state::write_bindings (elf_out *to, vec<depset *> sccs, unsigned *crc_p)
15147 {
15148 dump () && dump ("Writing binding table");
15149 dump.indent ();
15150
15151 unsigned num = 0;
15152 bytes_out sec (to);
15153 sec.begin ();
15154
15155 for (unsigned ix = 0; ix != sccs.length (); ix++)
15156 {
15157 depset *b = sccs[ix];
15158 if (b->is_binding ())
15159 {
15160 tree ns = b->get_entity ();
15161 dump () && dump ("Bindings %P section:%u", ns, b->get_name (),
15162 b->section);
15163 sec.u (to->name (b->get_name ()));
15164 write_namespace (sec, b->deps[0]);
15165 sec.u (b->section);
15166 num++;
15167 }
15168 }
15169
15170 sec.end (to, to->name (MOD_SNAME_PFX ".bnd"), crc_p);
15171 dump.outdent ();
15172
15173 return num;
15174 }
15175
15176 /* Read the binding table from MOD_SNAME_PFX.bind. */
15177
15178 bool
15179 module_state::read_bindings (unsigned num, unsigned lwm, unsigned hwm)
15180 {
15181 bytes_in sec;
15182
15183 if (!sec.begin (loc, from (), MOD_SNAME_PFX ".bnd"))
15184 return false;
15185
15186 dump () && dump ("Reading binding table");
15187 dump.indent ();
15188 for (; !sec.get_overrun () && num--;)
15189 {
15190 const char *name = from ()->name (sec.u ());
15191 tree ns = read_namespace (sec);
15192 unsigned snum = sec.u ();
15193
15194 if (!ns || !name || (snum - lwm) >= (hwm - lwm))
15195 sec.set_overrun ();
15196 if (!sec.get_overrun ())
15197 {
15198 tree id = get_identifier (name);
15199 dump () && dump ("Bindings %P section:%u", ns, id, snum);
15200 if (mod && !import_module_binding (ns, id, mod, snum))
15201 break;
15202 }
15203 }
15204
15205 dump.outdent ();
15206 if (!sec.end (from ()))
15207 return false;
15208 return true;
15209 }
15210
15211 /* Write the entity table to MOD_SNAME_PFX.ent
15212
15213 Each entry is a section number. */
15214
15215 void
15216 module_state::write_entities (elf_out *to, vec<depset *> depsets,
15217 unsigned count, unsigned *crc_p)
15218 {
15219 dump () && dump ("Writing entities");
15220 dump.indent ();
15221
15222 bytes_out sec (to);
15223 sec.begin ();
15224
15225 unsigned current = 0;
15226 for (unsigned ix = 0; ix < depsets.length (); ix++)
15227 {
15228 depset *d = depsets[ix];
15229
15230 switch (d->get_entity_kind ())
15231 {
15232 default:
15233 break;
15234
15235 case depset::EK_NAMESPACE:
15236 if (!d->is_import () && d->get_entity () != global_namespace)
15237 {
15238 gcc_checking_assert (d->cluster == current);
15239 current++;
15240 sec.u (0);
15241 }
15242 break;
15243
15244 case depset::EK_DECL:
15245 case depset::EK_SPECIALIZATION:
15246 case depset::EK_PARTIAL:
15247 gcc_checking_assert (!d->is_unreached ()
15248 && !d->is_import ()
15249 && d->cluster == current
15250 && d->section);
15251 current++;
15252 sec.u (d->section);
15253 break;
15254 }
15255 }
15256 gcc_assert (count == current);
15257 sec.end (to, to->name (MOD_SNAME_PFX ".ent"), crc_p);
15258 dump.outdent ();
15259 }
15260
15261 bool
15262 module_state::read_entities (unsigned count, unsigned lwm, unsigned hwm)
15263 {
15264 trees_in sec (this);
15265
15266 if (!sec.begin (loc, from (), MOD_SNAME_PFX ".ent"))
15267 return false;
15268
15269 dump () && dump ("Reading entities");
15270 dump.indent ();
15271
15272 vec_safe_reserve (entity_ary, count);
15273 unsigned ix;
15274 for (ix = 0; ix != count; ix++)
15275 {
15276 unsigned snum = sec.u ();
15277 if (snum && (snum - lwm) >= (hwm - lwm))
15278 sec.set_overrun ();
15279 if (sec.get_overrun ())
15280 break;
15281
15282 binding_slot slot;
15283 slot.u.binding = NULL_TREE;
15284 if (snum)
15285 slot.set_lazy (snum << 2);
15286 entity_ary->quick_push (slot);
15287 }
15288 entity_num = ix;
15289
15290 dump.outdent ();
15291 if (!sec.end (from ()))
15292 return false;
15293 return true;
15294 }
15295
15296 /* Write the pending table to MOD_SNAME_PFX.pnd
15297
15298 Specializations & partials are keyed to their primary template.
15299 Members are keyed to their context.
15300
15301 For specializations & partials, primary templates are keyed to the
15302 (namespace name) of their originating decl (because that's the only
15303 handle we have). */
15304
15305 void
15306 module_state::write_pendings (elf_out *to, vec<depset *> depsets,
15307 depset::hash &table,
15308 unsigned count, unsigned *crc_p)
15309 {
15310 dump () && dump ("Writing %u pendings", count);
15311 dump.indent ();
15312
15313 trees_out sec (to, this, table);
15314 sec.begin ();
15315
15316 for (unsigned ix = 0; ix < depsets.length (); ix++)
15317 {
15318 depset *d = depsets[ix];
15319 depset::entity_kind kind = d->get_entity_kind ();
15320 tree key = NULL_TREE;
15321 bool is_spec = false;
15322
15323
15324 if (kind == depset::EK_SPECIALIZATION)
15325 {
15326 is_spec = true;
15327 key = reinterpret_cast <spec_entry *> (d->deps[0])->tmpl;
15328 }
15329 else if (kind == depset::EK_PARTIAL)
15330 {
15331 is_spec = true;
15332 key = CLASSTYPE_TI_TEMPLATE (TREE_TYPE (d->get_entity ()));
15333 }
15334 else if (kind == depset::EK_DECL && d->is_member ())
15335 {
15336 tree ctx = DECL_CONTEXT (d->get_entity ());
15337 key = TYPE_NAME (ctx);
15338 if (tree ti = CLASSTYPE_TEMPLATE_INFO (ctx))
15339 if (DECL_TEMPLATE_RESULT (TI_TEMPLATE (ti)) == key)
15340 key = TI_TEMPLATE (ti);
15341 }
15342
15343 // FIXME:OPTIMIZATION More than likely when there is one pending
15344 // member, there will be others. All written in the same
15345 // section and keyed to the same class. We only need to record
15346 // one of them. The same is not true for specializations
15347
15348 if (key)
15349 {
15350 gcc_checking_assert (!d->is_import ());
15351
15352 {
15353 /* Key the entity to its key. */
15354 depset *key_dep = table.find_dependency (key);
15355 if (key_dep->get_entity_kind () == depset::EK_REDIRECT)
15356 key_dep = key_dep->deps[0];
15357 unsigned key_origin
15358 = key_dep->is_import () ? key_dep->section : 0;
15359 sec.u (key_origin);
15360 sec.u (key_dep->cluster);
15361 sec.u (d->cluster);
15362 dump () && dump ("%s %N entity:%u keyed to %M[%u] %N",
15363 is_spec ? "Specialization" : "Member",
15364 d->get_entity (),
15365 d->cluster, (*modules)[key_origin],
15366 key_dep->cluster, key);
15367 }
15368
15369 if (is_spec)
15370 {
15371 /* Key the general template to the originating decl. */
15372 tree origin = get_originating_module_decl (key);
15373 sec.tree_node (CP_DECL_CONTEXT (origin));
15374 sec.tree_node (DECL_NAME (origin));
15375
15376 unsigned origin_ident = import_entity_index (origin);
15377 module_state *origin_from = this;
15378 if (!(origin_ident & ~(~0u>>1)))
15379 origin_from = import_entity_module (origin_ident);
15380 sec.u (origin_from->remap);
15381 }
15382 else
15383 sec.tree_node (NULL);
15384 count--;
15385 }
15386 }
15387 gcc_assert (!count);
15388 sec.end (to, to->name (MOD_SNAME_PFX ".pnd"), crc_p);
15389 dump.outdent ();
15390 }
15391
15392 bool
15393 module_state::read_pendings (unsigned count)
15394 {
15395 trees_in sec (this);
15396
15397 if (!sec.begin (loc, from (), MOD_SNAME_PFX ".pnd"))
15398 return false;
15399
15400 dump () && dump ("Reading %u pendings", count);
15401 dump.indent ();
15402
15403 for (unsigned ix = 0; ix != count; ix++)
15404 {
15405 unsigned key_origin = slurp->remap_module (sec.u ());
15406 unsigned key_index = sec.u ();
15407 unsigned ent_index = sec.u ();
15408 module_state *from = (*modules)[key_origin];
15409 tree ns = sec.tree_node ();
15410
15411 if (!key_origin
15412 || key_index >= from->entity_num || ent_index >= entity_num
15413 || (ns && TREE_CODE (ns) != NAMESPACE_DECL))
15414 sec.set_overrun ();
15415
15416 if (sec.get_overrun ())
15417 break;
15418
15419 bool loaded = false;
15420 dump () && dump ("%s keyed to %M[%u] entity:%u",
15421 ns ? "Specialization" : "Member",
15422 from, key_index, ent_index);
15423 unsigned key_ident = from->entity_lwm + key_index;
15424 if (pending_table->add (ns ? key_ident : ~key_ident,
15425 ent_index + entity_lwm))
15426 {
15427 binding_slot &slot = (*entity_ary)[key_ident];
15428 if (slot.is_lazy ())
15429 slot.or_lazy (ns ? 1 : 2);
15430 else
15431 {
15432 tree key = slot;
15433
15434 loaded = true;
15435 if (ns)
15436 {
15437 if (key && TREE_CODE (key) == TEMPLATE_DECL)
15438 DECL_MODULE_PENDING_SPECIALIZATIONS_P (key) = true;
15439 else
15440 sec.set_overrun ();
15441 }
15442 else
15443 {
15444 if (key && TREE_CODE (key) == TYPE_DECL)
15445 DECL_MODULE_PENDING_MEMBERS_P (key) = true;
15446 else
15447 sec.set_overrun ();
15448 }
15449 }
15450 }
15451
15452 if (ns)
15453 {
15454 /* We also need to mark the namespace binding of the
15455 originating template, so we know to set its pending
15456 specializations flag, when we load it. */
15457 tree name = sec.tree_node ();
15458 unsigned origin = slurp->remap_module (sec.u ());
15459 if (!origin || !name || TREE_CODE (name) != IDENTIFIER_NODE)
15460 sec.set_overrun ();
15461 if (sec.get_overrun ())
15462 break;
15463
15464 module_state *origin_from = (*modules)[origin];
15465 if (!loaded
15466 && (origin_from->is_header ()
15467 || (origin_from->is_partition ()
15468 || origin_from->is_module ())))
15469 note_pending_specializations (ns, name, origin_from->is_header ());
15470 }
15471 }
15472
15473 dump.outdent ();
15474 if (!sec.end (from ()))
15475 return false;
15476 return true;
15477 }
15478
15479 /* Return true if module MOD cares about lazy specializations keyed to
15480 possibly duplicated entity bindings. */
15481
15482 bool
15483 lazy_specializations_p (unsigned mod, bool header_p, bool partition_p)
15484 {
15485 module_state *module = (*modules)[mod];
15486
15487 if (module->is_header ())
15488 return header_p;
15489
15490 if (module->is_module () || module->is_partition ())
15491 return partition_p;
15492
15493 return false;
15494 }
15495
15496 /* Read & write locations. */
15497 enum loc_kind {
15498 LK_ORDINARY,
15499 LK_MACRO,
15500 LK_IMPORT_ORDINARY,
15501 LK_IMPORT_MACRO,
15502 LK_ADHOC,
15503 LK_RESERVED,
15504 };
15505
15506 static const module_state *
15507 module_for_ordinary_loc (location_t loc)
15508 {
15509 unsigned pos = 1;
15510 unsigned len = modules->length () - pos;
15511
15512 while (len)
15513 {
15514 unsigned half = len / 2;
15515 module_state *probe = (*modules)[pos + half];
15516 if (loc < probe->ordinary_locs.first)
15517 len = half;
15518 else if (loc < probe->ordinary_locs.second)
15519 return probe;
15520 else
15521 {
15522 pos += half + 1;
15523 len = len - (half + 1);
15524 }
15525 }
15526
15527 return NULL;
15528 }
15529
15530 static const module_state *
15531 module_for_macro_loc (location_t loc)
15532 {
15533 unsigned pos = 1;
15534 unsigned len = modules->length () - pos;
15535
15536 while (len)
15537 {
15538 unsigned half = len / 2;
15539 module_state *probe = (*modules)[pos + half];
15540 if (loc >= probe->macro_locs.second)
15541 len = half;
15542 else if (loc >= probe->macro_locs.first)
15543 return probe;
15544 else
15545 {
15546 pos += half + 1;
15547 len = len - (half + 1);
15548 }
15549 }
15550
15551 return NULL;
15552 }
15553
15554 location_t
15555 module_state::imported_from () const
15556 {
15557 location_t from = loc;
15558 line_map_ordinary const *fmap
15559 = linemap_check_ordinary (linemap_lookup (line_table, from));
15560
15561 if (MAP_MODULE_P (fmap))
15562 from = linemap_included_from (fmap);
15563
15564 return from;
15565 }
15566
15567 /* If we're not streaming, record that we need location LOC.
15568 Otherwise stream it. */
15569
15570 void
15571 module_state::write_location (bytes_out &sec, location_t loc)
15572 {
15573 if (!sec.streaming_p ())
15574 /* This is where we should note we use this location. See comment
15575 about write_ordinary_maps. */
15576 return;
15577
15578 if (loc < RESERVED_LOCATION_COUNT)
15579 {
15580 dump (dumper::LOCATION) && dump ("Reserved location %u", unsigned (loc));
15581 sec.u (LK_RESERVED + loc);
15582 }
15583 else if (IS_ADHOC_LOC (loc))
15584 {
15585 dump (dumper::LOCATION) && dump ("Adhoc location");
15586 sec.u (LK_ADHOC);
15587 location_t locus = get_location_from_adhoc_loc (line_table, loc);
15588 write_location (sec, locus);
15589 source_range range = get_range_from_loc (line_table, loc);
15590 if (range.m_start == locus)
15591 /* Compress. */
15592 range.m_start = UNKNOWN_LOCATION;
15593 write_location (sec, range.m_start);
15594 write_location (sec, range.m_finish);
15595 }
15596 else if (IS_MACRO_LOC (loc))
15597 {
15598 if (const loc_spans::span *span = spans.macro (loc))
15599 {
15600 unsigned off = MAX_LOCATION_T - loc;
15601
15602 off -= span->macro_delta;
15603
15604 sec.u (LK_MACRO);
15605 sec.u (off);
15606 dump (dumper::LOCATION)
15607 && dump ("Macro location %u output %u", loc, off);
15608 }
15609 else if (const module_state *import = module_for_macro_loc (loc))
15610 {
15611 unsigned off = import->macro_locs.second - loc - 1;
15612 sec.u (LK_IMPORT_MACRO);
15613 sec.u (import->remap);
15614 sec.u (off);
15615 dump (dumper::LOCATION)
15616 && dump ("Imported macro location %u output %u:%u",
15617 loc, import->remap, off);
15618 }
15619 else
15620 gcc_unreachable ();
15621 }
15622 else if (IS_ORDINARY_LOC (loc))
15623 {
15624 if (const loc_spans::span *span = spans.ordinary (loc))
15625 {
15626 unsigned off = loc;
15627
15628 off += span->ordinary_delta;
15629 sec.u (LK_ORDINARY);
15630 sec.u (off);
15631
15632 dump (dumper::LOCATION)
15633 && dump ("Ordinary location %u output %u", loc, off);
15634 }
15635 else if (const module_state *import = module_for_ordinary_loc (loc))
15636 {
15637 unsigned off = loc - import->ordinary_locs.first;
15638 sec.u (LK_IMPORT_ORDINARY);
15639 sec.u (import->remap);
15640 sec.u (off);
15641 dump (dumper::LOCATION)
15642 && dump ("Imported ordinary location %u output %u:%u",
15643 import->remap, import->remap, off);
15644 }
15645 else
15646 gcc_unreachable ();
15647 }
15648 else
15649 gcc_unreachable ();
15650 }
15651
15652 location_t
15653 module_state::read_location (bytes_in &sec) const
15654 {
15655 location_t locus = UNKNOWN_LOCATION;
15656 unsigned kind = sec.u ();
15657 switch (kind)
15658 {
15659 default:
15660 {
15661 if (kind < LK_RESERVED + RESERVED_LOCATION_COUNT)
15662 locus = location_t (kind - LK_RESERVED);
15663 else
15664 sec.set_overrun ();
15665 dump (dumper::LOCATION)
15666 && dump ("Reserved location %u", unsigned (locus));
15667 }
15668 break;
15669
15670 case LK_ADHOC:
15671 {
15672 dump (dumper::LOCATION) && dump ("Adhoc location");
15673 locus = read_location (sec);
15674 source_range range;
15675 range.m_start = read_location (sec);
15676 if (range.m_start == UNKNOWN_LOCATION)
15677 range.m_start = locus;
15678 range.m_finish = read_location (sec);
15679 if (locus != loc && range.m_start != loc && range.m_finish != loc)
15680 locus = get_combined_adhoc_loc (line_table, locus, range, NULL);
15681 }
15682 break;
15683
15684 case LK_MACRO:
15685 {
15686 unsigned off = sec.u ();
15687
15688 if (macro_locs.first)
15689 {
15690 location_t adjusted = MAX_LOCATION_T - off;
15691 adjusted -= slurp->loc_deltas.second;
15692 if (adjusted < macro_locs.first)
15693 sec.set_overrun ();
15694 else if (adjusted < macro_locs.second)
15695 locus = adjusted;
15696 else
15697 sec.set_overrun ();
15698 }
15699 else
15700 locus = loc;
15701 dump (dumper::LOCATION)
15702 && dump ("Macro %u becoming %u", off, locus);
15703 }
15704 break;
15705
15706 case LK_ORDINARY:
15707 {
15708 unsigned off = sec.u ();
15709 if (ordinary_locs.second)
15710 {
15711 location_t adjusted = off;
15712
15713 adjusted += slurp->loc_deltas.first;
15714 if (adjusted >= ordinary_locs.second)
15715 sec.set_overrun ();
15716 else if (adjusted >= ordinary_locs.first)
15717 locus = adjusted;
15718 else if (adjusted < spans.main_start ())
15719 locus = off;
15720 }
15721 else
15722 locus = loc;
15723
15724 dump (dumper::LOCATION)
15725 && dump ("Ordinary location %u becoming %u", off, locus);
15726 }
15727 break;
15728
15729 case LK_IMPORT_MACRO:
15730 case LK_IMPORT_ORDINARY:
15731 {
15732 unsigned mod = sec.u ();
15733 unsigned off = sec.u ();
15734 const module_state *import = NULL;
15735
15736 if (!mod && !slurp->remap)
15737 /* This is an early read of a partition location during the
15738 read of our ordinary location map. */
15739 import = this;
15740 else
15741 {
15742 mod = slurp->remap_module (mod);
15743 if (!mod)
15744 sec.set_overrun ();
15745 else
15746 import = (*modules)[mod];
15747 }
15748
15749 if (import)
15750 {
15751 if (kind == LK_IMPORT_MACRO)
15752 {
15753 if (!import->macro_locs.first)
15754 locus = import->loc;
15755 else if (off < import->macro_locs.second - macro_locs.first)
15756 locus = import->macro_locs.second - off - 1;
15757 else
15758 sec.set_overrun ();
15759 }
15760 else
15761 {
15762 if (!import->ordinary_locs.second)
15763 locus = import->loc;
15764 else if (off < (import->ordinary_locs.second
15765 - import->ordinary_locs.first))
15766 locus = import->ordinary_locs.first + off;
15767 else
15768 sec.set_overrun ();
15769 }
15770 }
15771 }
15772 break;
15773 }
15774
15775 return locus;
15776 }
15777
15778 /* Prepare the span adjustments. */
15779
15780 // FIXME:QOI I do not prune the unreachable locations. Modules with
15781 // textually-large GMFs could well cause us to run out of locations.
15782 // Regular single-file modules could also be affected. We should
15783 // determine which locations we need to represent, so that we do not
15784 // grab more locations than necessary. An example is in
15785 // write_macro_maps where we work around macro expansions that are not
15786 // covering any locations -- the macro expands to nothing. Perhaps we
15787 // should decompose locations so that we can have a more graceful
15788 // degradation upon running out?
15789
15790 location_map_info
15791 module_state::write_prepare_maps (module_state_config *)
15792 {
15793 dump () && dump ("Preparing locations");
15794 dump.indent ();
15795
15796 dump () && dump ("Reserved locations [%u,%u) macro [%u,%u)",
15797 spans[loc_spans::SPAN_RESERVED].ordinary.first,
15798 spans[loc_spans::SPAN_RESERVED].ordinary.second,
15799 spans[loc_spans::SPAN_RESERVED].macro.first,
15800 spans[loc_spans::SPAN_RESERVED].macro.second);
15801
15802 location_map_info info;
15803
15804 info.num_maps.first = info.num_maps.second = 0;
15805
15806 /* Figure the alignment of ordinary location spans. */
15807 unsigned max_range = 0;
15808 for (unsigned ix = loc_spans::SPAN_FIRST; ix != spans.length (); ix++)
15809 {
15810 loc_spans::span &span = spans[ix];
15811 line_map_ordinary const *omap
15812 = linemap_check_ordinary (linemap_lookup (line_table,
15813 span.ordinary.first));
15814
15815 /* We should exactly match up. */
15816 gcc_checking_assert (MAP_START_LOCATION (omap) == span.ordinary.first);
15817
15818 line_map_ordinary const *fmap = omap;
15819 for (; MAP_START_LOCATION (omap) < span.ordinary.second; omap++)
15820 {
15821 /* We should never find a module linemap in an interval. */
15822 gcc_checking_assert (!MAP_MODULE_P (omap));
15823
15824 if (max_range < omap->m_range_bits)
15825 max_range = omap->m_range_bits;
15826 }
15827
15828 unsigned count = omap - fmap;
15829 info.num_maps.first += count;
15830
15831 if (span.macro.first != span.macro.second)
15832 {
15833 count = linemap_lookup_macro_index (line_table, span.macro.first) + 1;
15834 count -= linemap_lookup_macro_index (line_table,
15835 span.macro.second - 1);
15836 dump (dumper::LOCATION) && dump ("Span:%u %u macro maps", ix, count);
15837 info.num_maps.second += count;
15838 }
15839 }
15840
15841 /* Adjust the maps. Ordinary ones ascend, and we must maintain
15842 alignment. Macro ones descend, but are unaligned. */
15843 location_t ord_off = spans[loc_spans::SPAN_FIRST].ordinary.first;
15844 location_t mac_off = spans[loc_spans::SPAN_FIRST].macro.second;
15845 location_t range_mask = (1u << max_range) - 1;
15846
15847 dump () && dump ("Ordinary maps range bits:%u, preserve:%x, zero:%u",
15848 max_range, ord_off & range_mask, ord_off & ~range_mask);
15849
15850 for (unsigned ix = loc_spans::SPAN_FIRST; ix != spans.length (); ix++)
15851 {
15852 loc_spans::span &span = spans[ix];
15853
15854 span.macro_delta = mac_off - span.macro.second;
15855 mac_off -= span.macro.second - span.macro.first;
15856 dump () && dump ("Macro span:%u [%u,%u):%u->%d(%u)", ix,
15857 span.macro.first, span.macro.second,
15858 span.macro.second - span.macro.first,
15859 span.macro_delta, span.macro.first + span.macro_delta);
15860
15861 line_map_ordinary const *omap
15862 = linemap_check_ordinary (linemap_lookup (line_table,
15863 span.ordinary.first));
15864 location_t base = MAP_START_LOCATION (omap);
15865
15866 /* Preserve the low MAX_RANGE bits of base by incrementing ORD_OFF. */
15867 unsigned low_bits = base & range_mask;
15868 if ((ord_off & range_mask) > low_bits)
15869 low_bits += range_mask + 1;
15870 ord_off = (ord_off & ~range_mask) + low_bits;
15871 span.ordinary_delta = ord_off - base;
15872
15873 for (; MAP_START_LOCATION (omap) < span.ordinary.second; omap++)
15874 {
15875 location_t start_loc = MAP_START_LOCATION (omap);
15876 unsigned to = start_loc + span.ordinary_delta;
15877 location_t end_loc = MAP_START_LOCATION (omap + 1);
15878
15879 dump () && dump ("Ordinary span:%u [%u,%u):%u->%d(%u)", ix, start_loc,
15880 end_loc, end_loc - start_loc,
15881 span.ordinary_delta, to);
15882
15883 /* There should be no change in the low order bits. */
15884 gcc_checking_assert (((start_loc ^ to) & range_mask) == 0);
15885 }
15886 /* The ending serialized value. */
15887 ord_off = span.ordinary.second + span.ordinary_delta;
15888 }
15889
15890 dump () && dump ("Ordinary hwm:%u macro lwm:%u", ord_off, mac_off);
15891
15892 dump.outdent ();
15893
15894 info.max_range = max_range;
15895
15896 return info;
15897 }
15898
15899 bool
15900 module_state::read_prepare_maps (const module_state_config *cfg)
15901 {
15902 location_t ordinary = line_table->highest_location + 1;
15903 ordinary = ((ordinary + (1u << cfg->ordinary_loc_align))
15904 & ~((1u << cfg->ordinary_loc_align) - 1));
15905 ordinary += cfg->ordinary_locs;
15906
15907 location_t macro = LINEMAPS_MACRO_LOWEST_LOCATION (line_table);
15908 macro -= cfg->macro_locs;
15909
15910 if (ordinary < LINE_MAP_MAX_LOCATION_WITH_COLS
15911 && macro >= LINE_MAP_MAX_LOCATION)
15912 /* OK, we have enough locations. */
15913 return true;
15914
15915 ordinary_locs.first = ordinary_locs.second = 0;
15916 macro_locs.first = macro_locs.second = 0;
15917
15918 static bool informed = false;
15919 if (!informed)
15920 {
15921 /* Just give the notice once. */
15922 informed = true;
15923 inform (loc, "unable to represent further imported source locations");
15924 }
15925
15926 return false;
15927 }
15928
15929 /* Write the location maps. This also determines the shifts for the
15930 location spans. */
15931
15932 void
15933 module_state::write_ordinary_maps (elf_out *to, location_map_info &info,
15934 module_state_config *cfg, bool has_partitions,
15935 unsigned *crc_p)
15936 {
15937 dump () && dump ("Writing ordinary location maps");
15938 dump.indent ();
15939
15940 vec<const char *> filenames;
15941 filenames.create (20);
15942
15943 /* Determine the unique filenames. */
15944 // FIXME:QOI We should find the set of filenames when working out
15945 // which locations we actually need. See write_prepare_maps.
15946 for (unsigned ix = loc_spans::SPAN_FIRST; ix != spans.length (); ix++)
15947 {
15948 loc_spans::span &span = spans[ix];
15949 line_map_ordinary const *omap
15950 = linemap_check_ordinary (linemap_lookup (line_table,
15951 span.ordinary.first));
15952
15953 /* We should exactly match up. */
15954 gcc_checking_assert (MAP_START_LOCATION (omap) == span.ordinary.first);
15955
15956 for (; MAP_START_LOCATION (omap) < span.ordinary.second; omap++)
15957 {
15958 const char *fname = ORDINARY_MAP_FILE_NAME (omap);
15959
15960 /* We should never find a module linemap in an interval. */
15961 gcc_checking_assert (!MAP_MODULE_P (omap));
15962
15963 /* We expect very few filenames, so just an array. */
15964 for (unsigned jx = filenames.length (); jx--;)
15965 {
15966 const char *name = filenames[jx];
15967 if (0 == strcmp (name, fname))
15968 {
15969 /* Reset the linemap's name, because for things like
15970 preprocessed input we could have multple
15971 instances of the same name, and we'd rather not
15972 percolate that. */
15973 const_cast<line_map_ordinary *> (omap)->to_file = name;
15974 fname = NULL;
15975 break;
15976 }
15977 }
15978 if (fname)
15979 filenames.safe_push (fname);
15980 }
15981 }
15982
15983 bytes_out sec (to);
15984 sec.begin ();
15985
15986 /* Write the filenames. */
15987 unsigned len = filenames.length ();
15988 sec.u (len);
15989 dump () && dump ("%u source file names", len);
15990 for (unsigned ix = 0; ix != len; ix++)
15991 {
15992 const char *fname = filenames[ix];
15993 dump (dumper::LOCATION) && dump ("Source file[%u]=%s", ix, fname);
15994 sec.str (fname);
15995 }
15996
15997 location_t offset = spans[loc_spans::SPAN_FIRST].ordinary.first;
15998 location_t range_mask = (1u << info.max_range) - 1;
15999
16000 dump () && dump ("Ordinary maps:%u, range bits:%u, preserve:%x, zero:%u",
16001 info.num_maps.first, info.max_range, offset & range_mask,
16002 offset & ~range_mask);
16003 sec.u (info.num_maps.first); /* Num maps. */
16004 sec.u (info.max_range); /* Maximum range bits */
16005 sec.u (offset & range_mask); /* Bits to preserve. */
16006 sec.u (offset & ~range_mask);
16007
16008 for (unsigned ix = loc_spans::SPAN_FIRST; ix != spans.length (); ix++)
16009 {
16010 loc_spans::span &span = spans[ix];
16011 line_map_ordinary const *omap
16012 = linemap_check_ordinary (linemap_lookup (line_table,
16013 span.ordinary.first));
16014 for (; MAP_START_LOCATION (omap) < span.ordinary.second; omap++)
16015 {
16016 location_t start_loc = MAP_START_LOCATION (omap);
16017 unsigned to = start_loc + span.ordinary_delta;
16018
16019 dump (dumper::LOCATION)
16020 && dump ("Span:%u ordinary [%u,%u)->%u", ix, start_loc,
16021 MAP_START_LOCATION (omap + 1), to);
16022
16023 /* There should be no change in the low order bits. */
16024 gcc_checking_assert (((start_loc ^ to) & range_mask) == 0);
16025 sec.u (to);
16026
16027 /* Making accessors just for here, seems excessive. */
16028 sec.u (omap->reason);
16029 sec.u (omap->sysp);
16030 sec.u (omap->m_range_bits);
16031 sec.u (omap->m_column_and_range_bits - omap->m_range_bits);
16032
16033 const char *fname = ORDINARY_MAP_FILE_NAME (omap);
16034 for (unsigned ix = 0; ix != filenames.length (); ix++)
16035 if (filenames[ix] == fname)
16036 {
16037 sec.u (ix);
16038 break;
16039 }
16040 sec.u (ORDINARY_MAP_STARTING_LINE_NUMBER (omap));
16041
16042 /* Write the included from location, which means reading it
16043 while reading in the ordinary maps. So we'd better not
16044 be getting ahead of ourselves. */
16045 location_t from = linemap_included_from (omap);
16046 gcc_checking_assert (from < MAP_START_LOCATION (omap));
16047 if (from != UNKNOWN_LOCATION && has_partitions)
16048 {
16049 /* A partition's span will have a from pointing at a
16050 MODULE_INC. Find that map's from. */
16051 line_map_ordinary const *fmap
16052 = linemap_check_ordinary (linemap_lookup (line_table, from));
16053 if (MAP_MODULE_P (fmap))
16054 from = linemap_included_from (fmap);
16055 }
16056 write_location (sec, from);
16057 }
16058 /* The ending serialized value. */
16059 offset = MAP_START_LOCATION (omap) + span.ordinary_delta;
16060 }
16061 dump () && dump ("Ordinary location hwm:%u", offset);
16062 sec.u (offset);
16063
16064 // Record number of locations and alignment.
16065 cfg->ordinary_loc_align = info.max_range;
16066 cfg->ordinary_locs = offset;
16067
16068 filenames.release ();
16069
16070 sec.end (to, to->name (MOD_SNAME_PFX ".olm"), crc_p);
16071 dump.outdent ();
16072 }
16073
16074 void
16075 module_state::write_macro_maps (elf_out *to, location_map_info &info,
16076 module_state_config *cfg, unsigned *crc_p)
16077 {
16078 dump () && dump ("Writing macro location maps");
16079 dump.indent ();
16080
16081 bytes_out sec (to);
16082 sec.begin ();
16083
16084 dump () && dump ("Macro maps:%u", info.num_maps.second);
16085 sec.u (info.num_maps.second);
16086
16087 location_t offset = spans[loc_spans::SPAN_FIRST].macro.second;
16088 sec.u (offset);
16089
16090 unsigned macro_num = 0;
16091 for (unsigned ix = loc_spans::SPAN_FIRST; ix != spans.length (); ix++)
16092 {
16093 loc_spans::span &span = spans[ix];
16094 if (span.macro.first == span.macro.second)
16095 continue;
16096
16097 for (unsigned first
16098 = linemap_lookup_macro_index (line_table, span.macro.second - 1);
16099 first < LINEMAPS_MACRO_USED (line_table);
16100 first++)
16101 {
16102 line_map_macro const *mmap
16103 = LINEMAPS_MACRO_MAP_AT (line_table, first);
16104 location_t start_loc = MAP_START_LOCATION (mmap);
16105 if (start_loc < span.macro.first)
16106 break;
16107 if (macro_num == info.num_maps.second)
16108 {
16109 /* We're ending on an empty macro expansion. The
16110 preprocessor doesn't prune such things. */
16111 // FIXME:QOI This is an example of the non-pruning of
16112 // locations. See write_prepare_maps.
16113 gcc_checking_assert (!mmap->n_tokens);
16114 continue;
16115 }
16116
16117 sec.u (offset);
16118 sec.u (mmap->n_tokens);
16119 sec.cpp_node (mmap->macro);
16120 write_location (sec, mmap->expansion);
16121 const location_t *locs = mmap->macro_locations;
16122 /* There are lots of identical runs. */
16123 location_t prev = UNKNOWN_LOCATION;
16124 unsigned count = 0;
16125 unsigned runs = 0;
16126 for (unsigned jx = mmap->n_tokens * 2; jx--;)
16127 {
16128 location_t tok_loc = locs[jx];
16129 if (tok_loc == prev)
16130 {
16131 count++;
16132 continue;
16133 }
16134 runs++;
16135 sec.u (count);
16136 count = 1;
16137 prev = tok_loc;
16138 write_location (sec, tok_loc);
16139 }
16140 sec.u (count);
16141 dump (dumper::LOCATION)
16142 && dump ("Span:%u macro:%u %I %u/%u*2 locations [%u,%u)->%u",
16143 ix, macro_num, identifier (mmap->macro),
16144 runs, mmap->n_tokens,
16145 start_loc, start_loc + mmap->n_tokens,
16146 start_loc + span.macro_delta);
16147 macro_num++;
16148 offset -= mmap->n_tokens;
16149 gcc_checking_assert (offset == start_loc + span.macro_delta);
16150 }
16151 }
16152 dump () && dump ("Macro location lwm:%u", offset);
16153 sec.u (offset);
16154 gcc_assert (macro_num == info.num_maps.second);
16155
16156 cfg->macro_locs = MAX_LOCATION_T + 1 - offset;
16157
16158 sec.end (to, to->name (MOD_SNAME_PFX ".mlm"), crc_p);
16159 dump.outdent ();
16160 }
16161
16162 bool
16163 module_state::read_ordinary_maps ()
16164 {
16165 bytes_in sec;
16166
16167 if (!sec.begin (loc, from (), MOD_SNAME_PFX ".olm"))
16168 return false;
16169 dump () && dump ("Reading ordinary location maps");
16170 dump.indent ();
16171
16172 /* Read the filename table. */
16173 unsigned len = sec.u ();
16174 dump () && dump ("%u source file names", len);
16175 vec<const char *> filenames;
16176 filenames.create (len);
16177 for (unsigned ix = 0; ix != len; ix++)
16178 {
16179 size_t l;
16180 const char *buf = sec.str (&l);
16181 char *fname = XNEWVEC (char, l + 1);
16182 memcpy (fname, buf, l + 1);
16183 dump (dumper::LOCATION) && dump ("Source file[%u]=%s", ix, fname);
16184 /* We leak these names into the line-map table. But it
16185 doesn't own them. */
16186 filenames.quick_push (fname);
16187 }
16188
16189 unsigned num_ordinary = sec.u ();
16190 unsigned max_range = sec.u ();
16191 unsigned low_bits = sec.u ();
16192 location_t zero = sec.u ();
16193 location_t range_mask = (1u << max_range) - 1;
16194
16195 dump () && dump ("Ordinary maps:%u, range bits:%u, preserve:%x, zero:%u",
16196 num_ordinary, max_range, low_bits, zero);
16197
16198 location_t offset = line_table->highest_location + 1;
16199 /* Ensure offset doesn't go backwards at the start. */
16200 if ((offset & range_mask) > low_bits)
16201 offset += range_mask + 1;
16202 offset = (offset & ~range_mask);
16203
16204 bool propagated = spans.maybe_propagate (this, offset + low_bits);
16205
16206 line_map_ordinary *maps = static_cast<line_map_ordinary *>
16207 (line_map_new_raw (line_table, false, num_ordinary));
16208
16209 location_t lwm = offset;
16210 slurp->loc_deltas.first = offset - zero;
16211 ordinary_locs.first = zero + low_bits + slurp->loc_deltas.first;
16212 dump () && dump ("Ordinary loc delta %d", slurp->loc_deltas.first);
16213
16214 for (unsigned ix = 0; ix != num_ordinary && !sec.get_overrun (); ix++)
16215 {
16216 line_map_ordinary *map = &maps[ix];
16217 unsigned hwm = sec.u ();
16218
16219 /* Record the current HWM so that the below read_location is
16220 ok. */
16221 ordinary_locs.second = hwm + slurp->loc_deltas.first;
16222 map->start_location = hwm + (offset - zero);
16223 if (map->start_location < lwm)
16224 sec.set_overrun ();
16225 lwm = map->start_location;
16226 dump (dumper::LOCATION) && dump ("Map:%u %u->%u", ix, hwm, lwm);
16227 map->reason = lc_reason (sec.u ());
16228 map->sysp = sec.u ();
16229 map->m_range_bits = sec.u ();
16230 map->m_column_and_range_bits = map->m_range_bits + sec.u ();
16231
16232 unsigned fnum = sec.u ();
16233 map->to_file = (fnum < filenames.length () ? filenames[fnum] : "");
16234 map->to_line = sec.u ();
16235
16236 /* Root the outermost map at our location. */
16237 location_t from = read_location (sec);
16238 map->included_from = from != UNKNOWN_LOCATION ? from : loc;
16239 }
16240
16241 location_t hwm = sec.u ();
16242 ordinary_locs.second = hwm + slurp->loc_deltas.first;
16243
16244 /* highest_location is the one handed out, not the next one to
16245 hand out. */
16246 line_table->highest_location = ordinary_locs.second - 1;
16247
16248 if (line_table->highest_location >= LINE_MAP_MAX_LOCATION_WITH_COLS)
16249 /* We shouldn't run out of locations, as we checked before
16250 starting. */
16251 sec.set_overrun ();
16252 dump () && dump ("Ordinary location hwm:%u", ordinary_locs.second);
16253
16254 if (propagated)
16255 spans.close ();
16256
16257 filenames.release ();
16258
16259 dump.outdent ();
16260 if (!sec.end (from ()))
16261 return false;
16262
16263 return true;
16264 }
16265
16266 bool
16267 module_state::read_macro_maps ()
16268 {
16269 bytes_in sec;
16270
16271 if (!sec.begin (loc, from (), MOD_SNAME_PFX ".mlm"))
16272 return false;
16273 dump () && dump ("Reading macro location maps");
16274 dump.indent ();
16275
16276 unsigned num_macros = sec.u ();
16277 location_t zero = sec.u ();
16278 dump () && dump ("Macro maps:%u zero:%u", num_macros, zero);
16279
16280 bool propagated = spans.maybe_propagate (this);
16281
16282 location_t offset = LINEMAPS_MACRO_LOWEST_LOCATION (line_table);
16283 slurp->loc_deltas.second = zero - offset;
16284 macro_locs.second = zero - slurp->loc_deltas.second;
16285 dump () && dump ("Macro loc delta %d", slurp->loc_deltas.second);
16286
16287 for (unsigned ix = 0; ix != num_macros && !sec.get_overrun (); ix++)
16288 {
16289 unsigned lwm = sec.u ();
16290 /* Record the current LWM so that the below read_location is
16291 ok. */
16292 macro_locs.first = lwm - slurp->loc_deltas.second;
16293
16294 unsigned n_tokens = sec.u ();
16295 cpp_hashnode *node = sec.cpp_node ();
16296 location_t exp_loc = read_location (sec);
16297
16298 const line_map_macro *macro
16299 = linemap_enter_macro (line_table, node, exp_loc, n_tokens);
16300 if (!macro)
16301 /* We shouldn't run out of locations, as we checked that we
16302 had enough before starting. */
16303 break;
16304
16305 location_t *locs = macro->macro_locations;
16306 location_t tok_loc = UNKNOWN_LOCATION;
16307 unsigned count = sec.u ();
16308 unsigned runs = 0;
16309 for (unsigned jx = macro->n_tokens * 2; jx-- && !sec.get_overrun ();)
16310 {
16311 while (!count-- && !sec.get_overrun ())
16312 {
16313 runs++;
16314 tok_loc = read_location (sec);
16315 count = sec.u ();
16316 }
16317 locs[jx] = tok_loc;
16318 }
16319 if (count)
16320 sec.set_overrun ();
16321 dump (dumper::LOCATION)
16322 && dump ("Macro:%u %I %u/%u*2 locations [%u,%u)",
16323 ix, identifier (node), runs, n_tokens,
16324 MAP_START_LOCATION (macro),
16325 MAP_START_LOCATION (macro) + n_tokens);
16326 }
16327 location_t lwm = sec.u ();
16328 macro_locs.first = lwm - slurp->loc_deltas.second;
16329
16330 dump () && dump ("Macro location lwm:%u", macro_locs.first);
16331
16332 if (propagated)
16333 spans.close ();
16334
16335 dump.outdent ();
16336 if (!sec.end (from ()))
16337 return false;
16338
16339 return true;
16340 }
16341
16342 /* Serialize the definition of MACRO. */
16343
16344 void
16345 module_state::write_define (bytes_out &sec, const cpp_macro *macro, bool located)
16346 {
16347 sec.u (macro->count);
16348
16349 sec.b (macro->fun_like);
16350 sec.b (macro->variadic);
16351 sec.b (macro->syshdr);
16352 sec.bflush ();
16353
16354 if (located)
16355 write_location (sec, macro->line);
16356 if (macro->fun_like)
16357 {
16358 sec.u (macro->paramc);
16359 const cpp_hashnode *const *parms = macro->parm.params;
16360 for (unsigned ix = 0; ix != macro->paramc; ix++)
16361 sec.cpp_node (parms[ix]);
16362 }
16363
16364 unsigned len = 0;
16365 for (unsigned ix = 0; ix != macro->count; ix++)
16366 {
16367 const cpp_token *token = &macro->exp.tokens[ix];
16368 if (located)
16369 write_location (sec, token->src_loc);
16370 sec.u (token->type);
16371 sec.u (token->flags);
16372 switch (cpp_token_val_index (token))
16373 {
16374 default:
16375 gcc_unreachable ();
16376
16377 case CPP_TOKEN_FLD_ARG_NO:
16378 /* An argument reference. */
16379 sec.u (token->val.macro_arg.arg_no);
16380 sec.cpp_node (token->val.macro_arg.spelling);
16381 break;
16382
16383 case CPP_TOKEN_FLD_NODE:
16384 /* An identifier. */
16385 sec.cpp_node (token->val.node.node);
16386 if (token->val.node.spelling == token->val.node.node)
16387 /* The spelling will usually be the same. so optimize
16388 that. */
16389 sec.str (NULL, 0);
16390 else
16391 sec.cpp_node (token->val.node.spelling);
16392 break;
16393
16394 case CPP_TOKEN_FLD_NONE:
16395 break;
16396
16397 case CPP_TOKEN_FLD_STR:
16398 /* A string, number or comment. Not always NUL terminated,
16399 we stream out in a single contatenation with embedded
16400 NULs as that's a safe default. */
16401 len += token->val.str.len + 1;
16402 sec.u (token->val.str.len);
16403 break;
16404
16405 case CPP_TOKEN_FLD_SOURCE:
16406 case CPP_TOKEN_FLD_TOKEN_NO:
16407 case CPP_TOKEN_FLD_PRAGMA:
16408 /* These do not occur inside a macro itself. */
16409 gcc_unreachable ();
16410 }
16411 }
16412
16413 if (len)
16414 {
16415 char *ptr = reinterpret_cast<char *> (sec.buf (len));
16416 len = 0;
16417 for (unsigned ix = 0; ix != macro->count; ix++)
16418 {
16419 const cpp_token *token = &macro->exp.tokens[ix];
16420 if (cpp_token_val_index (token) == CPP_TOKEN_FLD_STR)
16421 {
16422 memcpy (ptr + len, token->val.str.text,
16423 token->val.str.len);
16424 len += token->val.str.len;
16425 ptr[len++] = 0;
16426 }
16427 }
16428 }
16429 }
16430
16431 /* Read a macro definition. */
16432
16433 cpp_macro *
16434 module_state::read_define (bytes_in &sec, cpp_reader *reader, bool located) const
16435 {
16436 unsigned count = sec.u ();
16437 /* We rely on knowing cpp_reader's hash table is ident_hash, and
16438 it's subobject allocator is stringpool_ggc_alloc and that is just
16439 a wrapper for ggc_alloc_atomic. */
16440 cpp_macro *macro
16441 = (cpp_macro *)ggc_alloc_atomic (sizeof (cpp_macro)
16442 + sizeof (cpp_token) * (count - !!count));
16443 memset (macro, 0, sizeof (cpp_macro) + sizeof (cpp_token) * (count - !!count));
16444
16445 macro->count = count;
16446 macro->kind = cmk_macro;
16447 macro->imported_p = true;
16448
16449 macro->fun_like = sec.b ();
16450 macro->variadic = sec.b ();
16451 macro->syshdr = sec.b ();
16452 sec.bflush ();
16453
16454 macro->line = located ? read_location (sec) : loc;
16455
16456 if (macro->fun_like)
16457 {
16458 unsigned paramc = sec.u ();
16459 cpp_hashnode **params
16460 = (cpp_hashnode **)ggc_alloc_atomic (sizeof (cpp_hashnode *) * paramc);
16461 macro->paramc = paramc;
16462 macro->parm.params = params;
16463 for (unsigned ix = 0; ix != paramc; ix++)
16464 params[ix] = sec.cpp_node ();
16465 }
16466
16467 unsigned len = 0;
16468 for (unsigned ix = 0; ix != count && !sec.get_overrun (); ix++)
16469 {
16470 cpp_token *token = &macro->exp.tokens[ix];
16471 token->src_loc = located ? read_location (sec) : loc;
16472 token->type = cpp_ttype (sec.u ());
16473 token->flags = sec.u ();
16474 switch (cpp_token_val_index (token))
16475 {
16476 default:
16477 sec.set_overrun ();
16478 break;
16479
16480 case CPP_TOKEN_FLD_ARG_NO:
16481 /* An argument reference. */
16482 {
16483 unsigned arg_no = sec.u ();
16484 if (arg_no - 1 >= macro->paramc)
16485 sec.set_overrun ();
16486 token->val.macro_arg.arg_no = arg_no;
16487 token->val.macro_arg.spelling = sec.cpp_node ();
16488 }
16489 break;
16490
16491 case CPP_TOKEN_FLD_NODE:
16492 /* An identifier. */
16493 token->val.node.node = sec.cpp_node ();
16494 token->val.node.spelling = sec.cpp_node ();
16495 if (!token->val.node.spelling)
16496 token->val.node.spelling = token->val.node.node;
16497 break;
16498
16499 case CPP_TOKEN_FLD_NONE:
16500 break;
16501
16502 case CPP_TOKEN_FLD_STR:
16503 /* A string, number or comment. */
16504 token->val.str.len = sec.u ();
16505 len += token->val.str.len + 1;
16506 break;
16507 }
16508 }
16509
16510 if (len)
16511 if (const char *ptr = reinterpret_cast<const char *> (sec.buf (len)))
16512 {
16513 /* There should be a final NUL. */
16514 if (ptr[len-1])
16515 sec.set_overrun ();
16516 /* cpp_alloc_token_string will add a final NUL. */
16517 const unsigned char *buf
16518 = cpp_alloc_token_string (reader, (const unsigned char *)ptr, len - 1);
16519 len = 0;
16520 for (unsigned ix = 0; ix != count && !sec.get_overrun (); ix++)
16521 {
16522 cpp_token *token = &macro->exp.tokens[ix];
16523 if (cpp_token_val_index (token) == CPP_TOKEN_FLD_STR)
16524 {
16525 token->val.str.text = buf + len;
16526 len += token->val.str.len;
16527 if (buf[len++])
16528 sec.set_overrun ();
16529 }
16530 }
16531 }
16532
16533 if (sec.get_overrun ())
16534 return NULL;
16535 return macro;
16536 }
16537
16538 /* Exported macro data. */
16539 struct macro_export {
16540 cpp_macro *def;
16541 location_t undef_loc;
16542
16543 macro_export ()
16544 :def (NULL), undef_loc (UNKNOWN_LOCATION)
16545 {
16546 }
16547 };
16548
16549 /* Imported macro data. */
16550 class macro_import {
16551 public:
16552 struct slot {
16553 #if defined (WORDS_BIGENDIAN) && SIZEOF_VOID_P == 8
16554 int offset;
16555 #endif
16556 /* We need to ensure we don't use the LSB for representation, as
16557 that's the union discriminator below. */
16558 unsigned bits;
16559
16560 #if !(defined (WORDS_BIGENDIAN) && SIZEOF_VOID_P == 8)
16561 int offset;
16562 #endif
16563
16564 public:
16565 enum Layout {
16566 L_DEF = 1,
16567 L_UNDEF = 2,
16568 L_BOTH = 3,
16569 L_MODULE_SHIFT = 2
16570 };
16571
16572 public:
16573 /* Not a regular ctor, because we put it in a union, and that's
16574 not allowed in C++ 98. */
16575 static slot ctor (unsigned module, unsigned defness)
16576 {
16577 gcc_checking_assert (defness);
16578 slot s;
16579 s.bits = defness | (module << L_MODULE_SHIFT);
16580 s.offset = -1;
16581 return s;
16582 }
16583
16584 public:
16585 unsigned get_defness () const
16586 {
16587 return bits & L_BOTH;
16588 }
16589 unsigned get_module () const
16590 {
16591 return bits >> L_MODULE_SHIFT;
16592 }
16593 void become_undef ()
16594 {
16595 bits &= ~unsigned (L_DEF);
16596 bits |= unsigned (L_UNDEF);
16597 }
16598 };
16599
16600 private:
16601 typedef vec<slot, va_heap, vl_embed> ary_t;
16602 union either {
16603 /* Discriminated by bits 0|1 != 0. The expected case is that
16604 there will be exactly one slot per macro, hence the effort of
16605 packing that. */
16606 ary_t *ary;
16607 slot single;
16608 } u;
16609
16610 public:
16611 macro_import ()
16612 {
16613 u.ary = NULL;
16614 }
16615
16616 private:
16617 bool single_p () const
16618 {
16619 return u.single.bits & slot::L_BOTH;
16620 }
16621 bool occupied_p () const
16622 {
16623 return u.ary != NULL;
16624 }
16625
16626 public:
16627 unsigned length () const
16628 {
16629 gcc_checking_assert (occupied_p ());
16630 return single_p () ? 1 : u.ary->length ();
16631 }
16632 slot &operator[] (unsigned ix)
16633 {
16634 gcc_checking_assert (occupied_p ());
16635 if (single_p ())
16636 {
16637 gcc_checking_assert (!ix);
16638 return u.single;
16639 }
16640 else
16641 return (*u.ary)[ix];
16642 }
16643
16644 public:
16645 slot &exported ();
16646 slot &append (unsigned module, unsigned defness);
16647 };
16648
16649 /* O is a new import to append to the list for. If we're an empty
16650 set, initialize us. */
16651
16652 macro_import::slot &
16653 macro_import::append (unsigned module, unsigned defness)
16654 {
16655 if (!occupied_p ())
16656 {
16657 u.single = slot::ctor (module, defness);
16658 return u.single;
16659 }
16660 else
16661 {
16662 bool single = single_p ();
16663 ary_t *m = single ? NULL : u.ary;
16664 vec_safe_reserve (m, 1 + single);
16665 if (single)
16666 m->quick_push (u.single);
16667 u.ary = m;
16668 return *u.ary->quick_push (slot::ctor (module, defness));
16669 }
16670 }
16671
16672 /* We're going to export something. Make sure the first import slot
16673 is us. */
16674
16675 macro_import::slot &
16676 macro_import::exported ()
16677 {
16678 if (occupied_p () && !(*this)[0].get_module ())
16679 {
16680 slot &res = (*this)[0];
16681 res.bits |= slot::L_DEF;
16682 return res;
16683 }
16684
16685 slot *a = &append (0, slot::L_DEF);
16686 if (!single_p ())
16687 {
16688 slot &f = (*this)[0];
16689 std::swap (f, *a);
16690 a = &f;
16691 }
16692 return *a;
16693 }
16694
16695 /* The import (&exported) macros. cpp_hasnode's deferred field
16696 indexes this array (offset by 1, so zero means 'not present'. */
16697
16698 static vec<macro_import, va_heap, vl_embed> *macro_imports;
16699
16700 /* The exported macros. A macro_import slot's zeroth element's offset
16701 indexes this array. If the zeroth slot is not for module zero,
16702 there is no export. */
16703
16704 static vec<macro_export, va_heap, vl_embed> *macro_exports;
16705
16706 /* The reachable set of header imports from this TU. */
16707
16708 static GTY(()) bitmap headers;
16709
16710 /* Get the (possibly empty) macro imports for NODE. */
16711
16712 static macro_import &
16713 get_macro_imports (cpp_hashnode *node)
16714 {
16715 if (node->deferred)
16716 return (*macro_imports)[node->deferred - 1];
16717
16718 vec_safe_reserve (macro_imports, 1);
16719 node->deferred = macro_imports->length () + 1;
16720 return *vec_safe_push (macro_imports, macro_import ());
16721 }
16722
16723 /* Get the macro export for export EXP of NODE. */
16724
16725 static macro_export &
16726 get_macro_export (macro_import::slot &slot)
16727 {
16728 if (slot.offset >= 0)
16729 return (*macro_exports)[slot.offset];
16730
16731 vec_safe_reserve (macro_exports, 1);
16732 slot.offset = macro_exports->length ();
16733 return *macro_exports->quick_push (macro_export ());
16734 }
16735
16736 /* If NODE is an exportable macro, add it to the export set. */
16737
16738 static int
16739 maybe_add_macro (cpp_reader *, cpp_hashnode *node, void *data_)
16740 {
16741 bool exporting = false;
16742
16743 if (cpp_user_macro_p (node))
16744 if (cpp_macro *macro = node->value.macro)
16745 /* Ignore imported, builtins, command line and forced header macros. */
16746 if (!macro->imported_p
16747 && !macro->lazy && macro->line >= spans.main_start ())
16748 {
16749 gcc_checking_assert (macro->kind == cmk_macro);
16750 /* I don't want to deal with this corner case, that I suspect is
16751 a devil's advocate reading of the standard. */
16752 gcc_checking_assert (!macro->extra_tokens);
16753
16754 macro_import::slot &slot = get_macro_imports (node).exported ();
16755 macro_export &exp = get_macro_export (slot);
16756 exp.def = macro;
16757 exporting = true;
16758 }
16759
16760 if (!exporting && node->deferred)
16761 {
16762 macro_import &imports = (*macro_imports)[node->deferred - 1];
16763 macro_import::slot &slot = imports[0];
16764 if (!slot.get_module ())
16765 {
16766 gcc_checking_assert (slot.get_defness ());
16767 exporting = true;
16768 }
16769 }
16770
16771 if (exporting)
16772 static_cast<vec<cpp_hashnode *> *> (data_)->safe_push (node);
16773
16774 return 1; /* Don't stop. */
16775 }
16776
16777 /* Order cpp_hashnodes A_ and B_ by their exported macro locations. */
16778
16779 static int
16780 macro_loc_cmp (const void *a_, const void *b_)
16781 {
16782 const cpp_hashnode *node_a = *(const cpp_hashnode *const *)a_;
16783 macro_import &import_a = (*macro_imports)[node_a->deferred - 1];
16784 const macro_export &export_a = (*macro_exports)[import_a[0].offset];
16785 location_t loc_a = export_a.def ? export_a.def->line : export_a.undef_loc;
16786
16787 const cpp_hashnode *node_b = *(const cpp_hashnode *const *)b_;
16788 macro_import &import_b = (*macro_imports)[node_b->deferred - 1];
16789 const macro_export &export_b = (*macro_exports)[import_b[0].offset];
16790 location_t loc_b = export_b.def ? export_b.def->line : export_b.undef_loc;
16791
16792 if (loc_a < loc_b)
16793 return +1;
16794 else if (loc_a > loc_b)
16795 return -1;
16796 else
16797 return 0;
16798 }
16799
16800 /* Write out the exported defines. This is two sections, one
16801 containing the definitions, the other a table of node names. */
16802
16803 unsigned
16804 module_state::write_macros (elf_out *to, cpp_reader *reader, unsigned *crc_p)
16805 {
16806 dump () && dump ("Writing macros");
16807 dump.indent ();
16808
16809 vec<cpp_hashnode *> macros;
16810 macros.create (100);
16811 cpp_forall_identifiers (reader, maybe_add_macro, &macros);
16812
16813 dump (dumper::MACRO) && dump ("No more than %u macros", macros.length ());
16814
16815 macros.qsort (macro_loc_cmp);
16816
16817 /* Write the defs */
16818 bytes_out sec (to);
16819 sec.begin ();
16820
16821 unsigned count = 0;
16822 for (unsigned ix = macros.length (); ix--;)
16823 {
16824 cpp_hashnode *node = macros[ix];
16825 macro_import::slot &slot = (*macro_imports)[node->deferred - 1][0];
16826 gcc_assert (!slot.get_module () && slot.get_defness ());
16827
16828 macro_export &mac = (*macro_exports)[slot.offset];
16829 gcc_assert (!!(slot.get_defness () & macro_import::slot::L_UNDEF)
16830 == (mac.undef_loc != UNKNOWN_LOCATION)
16831 && !!(slot.get_defness () & macro_import::slot::L_DEF)
16832 == (mac.def != NULL));
16833
16834 if (IDENTIFIER_KEYWORD_P (identifier (node)))
16835 {
16836 warning_at (mac.def->line, 0,
16837 "not exporting %<#define %E%> as it is a keyword",
16838 identifier (node));
16839 slot.offset = 0;
16840 continue;
16841 }
16842
16843 count++;
16844 slot.offset = sec.pos;
16845 dump (dumper::MACRO)
16846 && dump ("Writing macro %s%s%s %I at %u",
16847 slot.get_defness () & macro_import::slot::L_UNDEF
16848 ? "#undef" : "",
16849 slot.get_defness () == macro_import::slot::L_BOTH
16850 ? " & " : "",
16851 slot.get_defness () & macro_import::slot::L_DEF
16852 ? "#define" : "",
16853 identifier (node), slot.offset);
16854 if (mac.undef_loc != UNKNOWN_LOCATION)
16855 write_location (sec, mac.undef_loc);
16856 if (mac.def)
16857 write_define (sec, mac.def);
16858 }
16859 sec.end (to, to->name (MOD_SNAME_PFX ".def"), crc_p);
16860
16861 if (count)
16862 {
16863 /* Write the table. */
16864 bytes_out sec (to);
16865 sec.begin ();
16866 sec.u (count);
16867
16868 for (unsigned ix = macros.length (); ix--;)
16869 {
16870 const cpp_hashnode *node = macros[ix];
16871 macro_import::slot &slot = (*macro_imports)[node->deferred - 1][0];
16872
16873 if (slot.offset)
16874 {
16875 sec.cpp_node (node);
16876 sec.u (slot.get_defness ());
16877 sec.u (slot.offset);
16878 }
16879 }
16880 sec.end (to, to->name (MOD_SNAME_PFX ".mac"), crc_p);
16881 }
16882
16883 macros.release ();
16884 dump.outdent ();
16885 return count;
16886 }
16887
16888 bool
16889 module_state::read_macros ()
16890 {
16891 /* Get the def section. */
16892 if (!slurp->macro_defs.begin (loc, from (), MOD_SNAME_PFX ".def"))
16893 return false;
16894
16895 /* Get the tbl section, if there are defs. */
16896 if (slurp->macro_defs.more_p ()
16897 && !slurp->macro_tbl.begin (loc, from (), MOD_SNAME_PFX ".mac"))
16898 return false;
16899
16900 return true;
16901 }
16902
16903 /* Install the macro name table. */
16904
16905 void
16906 module_state::install_macros ()
16907 {
16908 bytes_in &sec = slurp->macro_tbl;
16909 if (!sec.size)
16910 return;
16911
16912 dump () && dump ("Reading macro table %M", this);
16913 dump.indent ();
16914
16915 unsigned count = sec.u ();
16916 dump () && dump ("%u macros", count);
16917 while (count--)
16918 {
16919 cpp_hashnode *node = sec.cpp_node ();
16920 macro_import &imp = get_macro_imports (node);
16921 unsigned flags = sec.u () & macro_import::slot::L_BOTH;
16922 if (!flags)
16923 sec.set_overrun ();
16924
16925 if (sec.get_overrun ())
16926 break;
16927
16928 macro_import::slot &slot = imp.append (mod, flags);
16929 slot.offset = sec.u ();
16930
16931 dump (dumper::MACRO)
16932 && dump ("Read %s macro %s%s%s %I at %u",
16933 imp.length () > 1 ? "add" : "new",
16934 flags & macro_import::slot::L_UNDEF ? "#undef" : "",
16935 flags == macro_import::slot::L_BOTH ? " & " : "",
16936 flags & macro_import::slot::L_DEF ? "#define" : "",
16937 identifier (node), slot.offset);
16938
16939 /* We'll leak an imported definition's TOKEN_FLD_STR's data
16940 here. But that only happens when we've had to resolve the
16941 deferred macro before this import -- why are you doing
16942 that? */
16943 if (cpp_macro *cur = cpp_set_deferred_macro (node))
16944 if (!cur->imported_p)
16945 {
16946 macro_import::slot &slot = imp.exported ();
16947 macro_export &exp = get_macro_export (slot);
16948 exp.def = cur;
16949 dump (dumper::MACRO)
16950 && dump ("Saving current #define %I", identifier (node));
16951 }
16952 }
16953
16954 /* We're now done with the table. */
16955 elf_in::release (slurp->from, sec);
16956
16957 dump.outdent ();
16958 }
16959
16960 /* Import the transitive macros. */
16961
16962 void
16963 module_state::import_macros ()
16964 {
16965 bitmap_ior_into (headers, slurp->headers);
16966
16967 bitmap_iterator bititer;
16968 unsigned bitnum;
16969 EXECUTE_IF_SET_IN_BITMAP (slurp->headers, 0, bitnum, bititer)
16970 (*modules)[bitnum]->install_macros ();
16971 }
16972
16973 /* NODE is being undefined at LOC. Record it in the export table, if
16974 necessary. */
16975
16976 void
16977 module_state::undef_macro (cpp_reader *, location_t loc, cpp_hashnode *node)
16978 {
16979 if (!node->deferred)
16980 /* The macro is not imported, so our undef is irrelevant. */
16981 return;
16982
16983 unsigned n = dump.push (NULL);
16984
16985 macro_import::slot &slot = (*macro_imports)[node->deferred - 1].exported ();
16986 macro_export &exp = get_macro_export (slot);
16987
16988 exp.undef_loc = loc;
16989 slot.become_undef ();
16990 exp.def = NULL;
16991
16992 dump (dumper::MACRO) && dump ("Recording macro #undef %I", identifier (node));
16993
16994 dump.pop (n);
16995 }
16996
16997 /* NODE is a deferred macro node. Determine the definition and return
16998 it, with NULL if undefined. May issue diagnostics.
16999
17000 This can leak memory, when merging declarations -- the string
17001 contents (TOKEN_FLD_STR) of each definition are allocated in
17002 unreclaimable cpp objstack. Only one will win. However, I do not
17003 expect this to be common -- mostly macros have a single point of
17004 definition. Perhaps we could restore the objstack to its position
17005 after the first imported definition (if that wins)? The macros
17006 themselves are GC'd. */
17007
17008 cpp_macro *
17009 module_state::deferred_macro (cpp_reader *reader, location_t loc,
17010 cpp_hashnode *node)
17011 {
17012 macro_import &imports = (*macro_imports)[node->deferred - 1];
17013
17014 unsigned n = dump.push (NULL);
17015 dump (dumper::MACRO) && dump ("Deferred macro %I", identifier (node));
17016
17017 bitmap visible (BITMAP_GGC_ALLOC ());
17018
17019 if (!((imports[0].get_defness () & macro_import::slot::L_UNDEF)
17020 && !imports[0].get_module ()))
17021 {
17022 /* Calculate the set of visible header imports. */
17023 bitmap_copy (visible, headers);
17024 for (unsigned ix = imports.length (); ix--;)
17025 {
17026 const macro_import::slot &slot = imports[ix];
17027 unsigned mod = slot.get_module ();
17028 if ((slot.get_defness () & macro_import::slot::L_UNDEF)
17029 && bitmap_bit_p (visible, mod))
17030 {
17031 bitmap arg = mod ? (*modules)[mod]->slurp->headers : headers;
17032 bitmap_and_compl_into (visible, arg);
17033 bitmap_set_bit (visible, mod);
17034 }
17035 }
17036 }
17037 bitmap_set_bit (visible, 0);
17038
17039 /* Now find the macros that are still visible. */
17040 bool failed = false;
17041 cpp_macro *def = NULL;
17042 vec<macro_export> defs;
17043 defs.create (imports.length ());
17044 for (unsigned ix = imports.length (); ix--;)
17045 {
17046 const macro_import::slot &slot = imports[ix];
17047 unsigned mod = slot.get_module ();
17048 if (bitmap_bit_p (visible, mod))
17049 {
17050 macro_export *pushed = NULL;
17051 if (mod)
17052 {
17053 const module_state *imp = (*modules)[mod];
17054 bytes_in &sec = imp->slurp->macro_defs;
17055 if (!sec.get_overrun ())
17056 {
17057 dump (dumper::MACRO)
17058 && dump ("Reading macro %s%s%s %I module %M at %u",
17059 slot.get_defness () & macro_import::slot::L_UNDEF
17060 ? "#undef" : "",
17061 slot.get_defness () == macro_import::slot::L_BOTH
17062 ? " & " : "",
17063 slot.get_defness () & macro_import::slot::L_DEF
17064 ? "#define" : "",
17065 identifier (node), imp, slot.offset);
17066 sec.random_access (slot.offset);
17067
17068 macro_export exp;
17069 if (slot.get_defness () & macro_import::slot::L_UNDEF)
17070 exp.undef_loc = imp->read_location (sec);
17071 if (slot.get_defness () & macro_import::slot::L_DEF)
17072 exp.def = imp->read_define (sec, reader);
17073 if (sec.get_overrun ())
17074 error_at (loc, "macro definitions of %qE corrupted",
17075 imp->name);
17076 else
17077 pushed = defs.quick_push (exp);
17078 }
17079 }
17080 else
17081 pushed = defs.quick_push ((*macro_exports)[slot.offset]);
17082 if (pushed && pushed->def)
17083 {
17084 if (!def)
17085 def = pushed->def;
17086 else if (cpp_compare_macros (def, pushed->def))
17087 failed = true;
17088 }
17089 }
17090 }
17091
17092 if (failed)
17093 {
17094 /* If LOC is the first loc, this is the end of file check, which
17095 is a warning. */
17096 if (loc == MAP_START_LOCATION (LINEMAPS_ORDINARY_MAP_AT (line_table, 0)))
17097 warning_at (loc, OPT_Winvalid_imported_macros,
17098 "inconsistent imported macro definition %qE",
17099 identifier (node));
17100 else
17101 error_at (loc, "inconsistent imported macro definition %qE",
17102 identifier (node));
17103 for (unsigned ix = defs.length (); ix--;)
17104 {
17105 macro_export &exp = defs[ix];
17106 if (exp.undef_loc)
17107 inform (exp.undef_loc, "%<#undef %E%>", identifier (node));
17108 if (exp.def)
17109 inform (exp.def->line, "%<#define %s%>",
17110 cpp_macro_definition (reader, node, exp.def));
17111 }
17112 def = NULL;
17113 }
17114
17115 defs.release ();
17116
17117 dump.pop (n);
17118
17119 return def;
17120 }
17121
17122 /* Stream the static aggregates. Sadly some headers (ahem:
17123 iostream) contain static vars, and rely on them to run global
17124 ctors. */
17125 unsigned
17126 module_state::write_inits (elf_out *to, depset::hash &table, unsigned *crc_ptr)
17127 {
17128 if (!static_aggregates && !tls_aggregates)
17129 return 0;
17130
17131 dump () && dump ("Writing initializers");
17132 dump.indent ();
17133
17134 static_aggregates = nreverse (static_aggregates);
17135 tls_aggregates = nreverse (tls_aggregates);
17136
17137 unsigned count = 0;
17138 trees_out sec (to, this, table, ~0u);
17139 sec.begin ();
17140
17141 tree list = static_aggregates;
17142 for (int passes = 0; passes != 2; passes++)
17143 {
17144 for (tree init = list; init; init = TREE_CHAIN (init), count++)
17145 if (TREE_LANG_FLAG_0 (init))
17146 {
17147 tree decl = TREE_VALUE (init);
17148
17149 dump ("Initializer:%u for %N", count, decl);
17150 sec.tree_node (decl);
17151 }
17152
17153 list = tls_aggregates;
17154 }
17155
17156 sec.end (to, to->name (MOD_SNAME_PFX ".ini"), crc_ptr);
17157 dump.outdent ();
17158
17159 return count;
17160 }
17161
17162 bool
17163 module_state::read_inits (unsigned count)
17164 {
17165 trees_in sec (this);
17166 if (!sec.begin (loc, from (), from ()->find (MOD_SNAME_PFX ".ini")))
17167 return false;
17168 dump () && dump ("Reading %u initializers", count);
17169 dump.indent ();
17170
17171 for (unsigned ix = 0; ix != count; ix++)
17172 {
17173 /* Merely referencing the decl causes its initializer to be read
17174 and added to the correct list. */
17175 tree decl = sec.tree_node ();
17176
17177 if (sec.get_overrun ())
17178 break;
17179 if (decl)
17180 dump ("Initializer:%u for %N", count, decl);
17181 }
17182 dump.outdent ();
17183 if (!sec.end (from ()))
17184 return false;
17185 return true;
17186 }
17187
17188 void
17189 module_state::write_counts (elf_out *to, unsigned counts[MSC_HWM],
17190 unsigned *crc_ptr)
17191 {
17192 bytes_out cfg (to);
17193
17194 cfg.begin ();
17195
17196 for (unsigned ix = MSC_HWM; ix--;)
17197 cfg.u (counts[ix]);
17198
17199 if (dump ())
17200 {
17201 dump ("Cluster sections are [%u,%u)",
17202 counts[MSC_sec_lwm], counts[MSC_sec_hwm]);
17203 dump ("Bindings %u", counts[MSC_bindings]);
17204 dump ("Pendings %u", counts[MSC_pendings]);
17205 dump ("Entities %u", counts[MSC_entities]);
17206 dump ("Namespaces %u", counts[MSC_namespaces]);
17207 dump ("Macros %u", counts[MSC_macros]);
17208 dump ("Initializers %u", counts[MSC_inits]);
17209 }
17210
17211 cfg.end (to, to->name (MOD_SNAME_PFX ".cnt"), crc_ptr);
17212 }
17213
17214 bool
17215 module_state::read_counts (unsigned counts[MSC_HWM])
17216 {
17217 bytes_in cfg;
17218
17219 if (!cfg.begin (loc, from (), MOD_SNAME_PFX ".cnt"))
17220 return false;
17221
17222 for (unsigned ix = MSC_HWM; ix--;)
17223 counts[ix] = cfg.u ();
17224
17225 if (dump ())
17226 {
17227 dump ("Declaration sections are [%u,%u)",
17228 counts[MSC_sec_lwm], counts[MSC_sec_hwm]);
17229 dump ("Bindings %u", counts[MSC_bindings]);
17230 dump ("Pendings %u", counts[MSC_pendings]);
17231 dump ("Entities %u", counts[MSC_entities]);
17232 dump ("Namespaces %u", counts[MSC_namespaces]);
17233 dump ("Macros %u", counts[MSC_macros]);
17234 dump ("Initializers %u", counts[MSC_inits]);
17235 }
17236
17237 return cfg.end (from ());
17238 }
17239
17240 /* Tool configuration: MOD_SNAME_PFX .config
17241
17242 This is data that confirms current state (or fails). */
17243
17244 void
17245 module_state::write_config (elf_out *to, module_state_config &config,
17246 unsigned inner_crc)
17247 {
17248 bytes_out cfg (to);
17249
17250 cfg.begin ();
17251
17252 /* Write version and inner crc as u32 values, for easier
17253 debug inspection. */
17254 dump () && dump ("Writing version=%V, inner_crc=%x",
17255 MODULE_VERSION, inner_crc);
17256 cfg.u32 (unsigned (MODULE_VERSION));
17257 cfg.u32 (inner_crc);
17258
17259 cfg.u (to->name (is_header () ? "" : get_flatname ()));
17260
17261 /* Configuration. */
17262 dump () && dump ("Writing target='%s', host='%s'",
17263 TARGET_MACHINE, HOST_MACHINE);
17264 unsigned target = to->name (TARGET_MACHINE);
17265 unsigned host = (!strcmp (TARGET_MACHINE, HOST_MACHINE)
17266 ? target : to->name (HOST_MACHINE));
17267 cfg.u (target);
17268 cfg.u (host);
17269
17270 cfg.str (config.dialect_str);
17271 cfg.u (extensions);
17272
17273 /* Global tree information. We write the globals crc separately,
17274 rather than mix it directly into the overall crc, as it is used
17275 to ensure data match between instances of the compiler, not
17276 integrity of the file. */
17277 dump () && dump ("Writing globals=%u, crc=%x",
17278 fixed_trees->length (), global_crc);
17279 cfg.u (fixed_trees->length ());
17280 cfg.u32 (global_crc);
17281
17282 if (is_partition ())
17283 cfg.u (is_interface ());
17284
17285 cfg.u (config.num_imports);
17286 cfg.u (config.num_partitions);
17287
17288 cfg.u (config.ordinary_locs);
17289 cfg.u (config.macro_locs);
17290 cfg.u (config.ordinary_loc_align);
17291
17292 /* Now generate CRC, we'll have incorporated the inner CRC because
17293 of its serialization above. */
17294 cfg.end (to, to->name (MOD_SNAME_PFX ".cfg"), &crc);
17295 dump () && dump ("Writing CRC=%x", crc);
17296 }
17297
17298 void
17299 module_state::note_cmi_name ()
17300 {
17301 if (!cmi_noted_p && filename)
17302 {
17303 cmi_noted_p = true;
17304 inform (loc, "compiled module file is %qs",
17305 maybe_add_cmi_prefix (filename));
17306 }
17307 }
17308
17309 bool
17310 module_state::read_config (module_state_config &config)
17311 {
17312 bytes_in cfg;
17313
17314 if (!cfg.begin (loc, from (), MOD_SNAME_PFX ".cfg"))
17315 return false;
17316
17317 /* Check version. */
17318 unsigned my_ver = MODULE_VERSION;
17319 unsigned their_ver = cfg.u32 ();
17320 dump () && dump (my_ver == their_ver ? "Version %V"
17321 : "Expecting %V found %V", my_ver, their_ver);
17322 if (their_ver != my_ver)
17323 {
17324 /* The compiler versions differ. Close enough? */
17325 verstr_t my_string, their_string;
17326
17327 version2string (my_ver, my_string);
17328 version2string (their_ver, their_string);
17329
17330 /* Reject when either is non-experimental or when experimental
17331 major versions differ. */
17332 bool reject_p = ((!IS_EXPERIMENTAL (my_ver)
17333 || !IS_EXPERIMENTAL (their_ver)
17334 || MODULE_MAJOR (my_ver) != MODULE_MAJOR (their_ver))
17335 /* The 'I know what I'm doing' switch. */
17336 && !flag_module_version_ignore);
17337 bool inform_p = true;
17338 if (reject_p)
17339 {
17340 cfg.set_overrun ();
17341 error_at (loc, "compiled module is %sversion %s",
17342 IS_EXPERIMENTAL (their_ver) ? "experimental " : "",
17343 their_string);
17344 }
17345 else
17346 inform_p = warning_at (loc, 0, "compiled module is %sversion %s",
17347 IS_EXPERIMENTAL (their_ver) ? "experimental " : "",
17348 their_string);
17349
17350 if (inform_p)
17351 {
17352 inform (loc, "compiler is %sversion %s%s%s",
17353 IS_EXPERIMENTAL (my_ver) ? "experimental " : "",
17354 my_string,
17355 reject_p ? "" : flag_module_version_ignore
17356 ? ", be it on your own head!" : ", close enough?",
17357 reject_p ? "" : " \xc2\xaf\\_(\xe3\x83\x84)_/\xc2\xaf");
17358 note_cmi_name ();
17359 }
17360
17361 if (reject_p)
17362 goto done;
17363 }
17364
17365 /* We wrote the inner crc merely to merge it, so simply read it
17366 back and forget it. */
17367 cfg.u32 ();
17368
17369 /* Check module name. */
17370 {
17371 const char *their_name = from ()->name (cfg.u ());
17372 const char *our_name = "";
17373
17374 if (!is_header ())
17375 our_name = get_flatname ();
17376
17377 /* Header units can be aliased, so name checking is
17378 inappropriate. */
17379 if (0 != strcmp (their_name, our_name))
17380 {
17381 error_at (loc,
17382 their_name[0] && our_name[0] ? G_("module %qs found")
17383 : their_name[0]
17384 ? G_("header module expected, module %qs found")
17385 : G_("module %qs expected, header module found"),
17386 their_name[0] ? their_name : our_name);
17387 cfg.set_overrun ();
17388 goto done;
17389 }
17390 }
17391
17392 /* Check the CRC after the above sanity checks, so that the user is
17393 clued in. */
17394 {
17395 unsigned e_crc = crc;
17396 crc = cfg.get_crc ();
17397 dump () && dump ("Reading CRC=%x", crc);
17398 if (!is_direct () && crc != e_crc)
17399 {
17400 error_at (loc, "module %qs CRC mismatch", get_flatname ());
17401 cfg.set_overrun ();
17402 goto done;
17403 }
17404 }
17405
17406 /* Check target & host. */
17407 {
17408 const char *their_target = from ()->name (cfg.u ());
17409 const char *their_host = from ()->name (cfg.u ());
17410 dump () && dump ("Read target='%s', host='%s'", their_target, their_host);
17411 if (strcmp (their_target, TARGET_MACHINE)
17412 || strcmp (their_host, HOST_MACHINE))
17413 {
17414 error_at (loc, "target & host is %qs:%qs, expected %qs:%qs",
17415 their_target, TARGET_MACHINE, their_host, HOST_MACHINE);
17416 cfg.set_overrun ();
17417 goto done;
17418 }
17419 }
17420
17421 /* Check compilation dialect. This must match. */
17422 {
17423 const char *their_dialect = cfg.str ();
17424 if (strcmp (their_dialect, config.dialect_str))
17425 {
17426 error_at (loc, "language dialect differs %qs, expected %qs",
17427 their_dialect, config.dialect_str);
17428 cfg.set_overrun ();
17429 goto done;
17430 }
17431 }
17432
17433 /* Check for extensions. If they set any, we must have them set
17434 too. */
17435 {
17436 unsigned ext = cfg.u ();
17437 unsigned allowed = (flag_openmp ? SE_OPENMP : 0);
17438
17439 if (unsigned bad = ext & ~allowed)
17440 {
17441 if (bad & SE_OPENMP)
17442 error_at (loc, "module contains OpenMP, use %<-fopenmp%> to enable");
17443 cfg.set_overrun ();
17444 goto done;
17445 }
17446 extensions = ext;
17447 }
17448
17449 /* Check global trees. */
17450 {
17451 unsigned their_fixed_length = cfg.u ();
17452 unsigned their_fixed_crc = cfg.u32 ();
17453 dump () && dump ("Read globals=%u, crc=%x",
17454 their_fixed_length, their_fixed_crc);
17455 if (!flag_preprocess_only
17456 && (their_fixed_length != fixed_trees->length ()
17457 || their_fixed_crc != global_crc))
17458 {
17459 error_at (loc, "fixed tree mismatch");
17460 cfg.set_overrun ();
17461 goto done;
17462 }
17463 }
17464
17465 /* All non-partitions are interfaces. */
17466 interface_p = !is_partition () || cfg.u ();
17467
17468 config.num_imports = cfg.u ();
17469 config.num_partitions = cfg.u ();
17470
17471 config.ordinary_locs = cfg.u ();
17472 config.macro_locs = cfg.u ();
17473 config.ordinary_loc_align = cfg.u ();
17474
17475 done:
17476 return cfg.end (from ());
17477 }
17478
17479 /* Use ELROND format to record the following sections:
17480 qualified-names : binding value(s)
17481 MOD_SNAME_PFX.README : human readable, strings
17482 MOD_SNAME_PFX.ENV : environment strings, strings
17483 MOD_SNAME_PFX.nms : namespace hierarchy
17484 MOD_SNAME_PFX.bnd : binding table
17485 MOD_SNAME_PFX.spc : specialization table
17486 MOD_SNAME_PFX.imp : import table
17487 MOD_SNAME_PFX.ent : entity table
17488 MOD_SNAME_PFX.prt : partitions table
17489 MOD_SNAME_PFX.olm : ordinary line maps
17490 MOD_SNAME_PFX.mlm : macro line maps
17491 MOD_SNAME_PFX.def : macro definitions
17492 MOD_SNAME_PFX.mac : macro index
17493 MOD_SNAME_PFX.ini : inits
17494 MOD_SNAME_PFX.cnt : counts
17495 MOD_SNAME_PFX.cfg : config data
17496 */
17497
17498 void
17499 module_state::write (elf_out *to, cpp_reader *reader)
17500 {
17501 /* Figure out remapped module numbers, which might elide
17502 partitions. */
17503 bitmap partitions = NULL;
17504 if (!is_header () && !is_partition ())
17505 partitions = BITMAP_GGC_ALLOC ();
17506
17507 unsigned mod_hwm = 1;
17508 for (unsigned ix = 1; ix != modules->length (); ix++)
17509 {
17510 module_state *imp = (*modules)[ix];
17511
17512 /* Promote any non-partition direct import from a partition, unless
17513 we're a partition. */
17514 if (!is_partition () && !imp->is_partition ()
17515 && imp->is_partition_direct ())
17516 imp->directness = MD_PURVIEW_DIRECT;
17517
17518 /* Write any import that is not a partition, unless we're a
17519 partition. */
17520 if (!partitions || !imp->is_partition ())
17521 imp->remap = mod_hwm++;
17522 else
17523 {
17524 dump () && dump ("Partition %M %u", imp, ix);
17525 bitmap_set_bit (partitions, ix);
17526 imp->remap = 0;
17527 /* All interface partitions must be exported. */
17528 if (imp->is_interface () && !bitmap_bit_p (exports, imp->mod))
17529 {
17530 error_at (imp->loc, "interface partition is not exported");
17531 bitmap_set_bit (exports, imp->mod);
17532 }
17533
17534 /* All the partition entities should have been loaded when
17535 loading the partition. */
17536 if (CHECKING_P)
17537 for (unsigned jx = 0; jx != imp->entity_num; jx++)
17538 {
17539 binding_slot *slot = &(*entity_ary)[imp->entity_lwm + jx];
17540 gcc_checking_assert (!slot->is_lazy ());
17541 }
17542 }
17543 }
17544
17545 if (partitions && bitmap_empty_p (partitions))
17546 /* No partitions present. */
17547 partitions = nullptr;
17548
17549 /* Find the set of decls we must write out. */
17550 depset::hash table (DECL_NAMESPACE_BINDINGS (global_namespace)->size () * 8);
17551 /* Add the specializations before the writables, so that we can
17552 detect injected friend specializations. */
17553 table.add_specializations (true);
17554 table.add_specializations (false);
17555 if (partial_specializations)
17556 {
17557 table.add_partial_entities (partial_specializations);
17558 partial_specializations = NULL;
17559 }
17560 table.add_namespace_entities (global_namespace, partitions);
17561 if (class_members)
17562 {
17563 table.add_class_entities (class_members);
17564 class_members = NULL;
17565 }
17566
17567 /* Now join everything up. */
17568 table.find_dependencies ();
17569
17570 if (!table.finalize_dependencies ())
17571 {
17572 to->set_error ();
17573 return;
17574 }
17575
17576 #if CHECKING_P
17577 /* We're done verifying at-most once reading, reset to verify
17578 at-most once writing. */
17579 note_defs = note_defs_table_t::create_ggc (1000);
17580 #endif
17581
17582 /* Determine Strongy Connected Components. */
17583 vec<depset *> sccs = table.connect ();
17584
17585 unsigned crc = 0;
17586 module_state_config config;
17587 location_map_info map_info = write_prepare_maps (&config);
17588 unsigned counts[MSC_HWM];
17589
17590 config.num_imports = mod_hwm;
17591 config.num_partitions = modules->length () - mod_hwm;
17592 memset (counts, 0, sizeof (counts));
17593
17594 /* depset::cluster is the cluster number,
17595 depset::section is unspecified scratch value.
17596
17597 The following loops make use of the tarjan property that
17598 dependencies will be earlier in the SCCS array. */
17599
17600 /* This first loop determines the number of depsets in each SCC, and
17601 also the number of namespaces we're dealing with. During the
17602 loop, the meaning of a couple of depset fields now change:
17603
17604 depset::cluster -> size_of cluster, if first of cluster & !namespace
17605 depset::section -> section number of cluster (if !namespace). */
17606
17607 unsigned n_spaces = 0;
17608 counts[MSC_sec_lwm] = counts[MSC_sec_hwm] = to->get_section_limit ();
17609 for (unsigned size, ix = 0; ix < sccs.length (); ix += size)
17610 {
17611 depset **base = &sccs[ix];
17612
17613 if (base[0]->get_entity_kind () == depset::EK_NAMESPACE)
17614 {
17615 n_spaces++;
17616 size = 1;
17617 }
17618 else
17619 {
17620 /* Count the members in this cluster. */
17621 for (size = 1; ix + size < sccs.length (); size++)
17622 if (base[size]->cluster != base[0]->cluster)
17623 break;
17624
17625 for (unsigned jx = 0; jx != size; jx++)
17626 {
17627 /* Set the section number. */
17628 base[jx]->cluster = ~(~0u >> 1); /* A bad value. */
17629 base[jx]->section = counts[MSC_sec_hwm];
17630 }
17631
17632 /* Save the size in the first member's cluster slot. */
17633 base[0]->cluster = size;
17634
17635 counts[MSC_sec_hwm]++;
17636 }
17637 }
17638
17639 /* Write the clusters. Namespace decls are put in the spaces array.
17640 The meaning of depset::cluster changes to provide the
17641 unnamed-decl count of the depset's decl (and remains zero for
17642 non-decls and non-unnamed). */
17643 unsigned bytes = 0;
17644 vec<depset *> spaces;
17645 spaces.create (n_spaces);
17646
17647 for (unsigned size, ix = 0; ix < sccs.length (); ix += size)
17648 {
17649 depset **base = &sccs[ix];
17650
17651 if (base[0]->get_entity_kind () == depset::EK_NAMESPACE)
17652 {
17653 tree decl = base[0]->get_entity ();
17654 if (decl == global_namespace)
17655 base[0]->cluster = 0;
17656 else if (!base[0]->is_import ())
17657 {
17658 base[0]->cluster = counts[MSC_entities]++;
17659 spaces.quick_push (base[0]);
17660 counts[MSC_namespaces]++;
17661 if (CHECKING_P)
17662 {
17663 /* Add it to the entity map, such that we can tell it is
17664 part of us. */
17665 bool existed;
17666 unsigned *slot = &entity_map->get_or_insert
17667 (DECL_UID (decl), &existed);
17668 if (existed)
17669 /* It must have come from a partition. */
17670 gcc_checking_assert
17671 (import_entity_module (*slot)->is_partition ());
17672 *slot = ~base[0]->cluster;
17673 }
17674 dump (dumper::CLUSTER) && dump ("Cluster namespace %N", decl);
17675 }
17676 size = 1;
17677 }
17678 else
17679 {
17680 size = base[0]->cluster;
17681
17682 /* Cluster is now used to number entities. */
17683 base[0]->cluster = ~(~0u >> 1); /* A bad value. */
17684
17685 sort_cluster (&table, base, size);
17686
17687 /* Record the section for consistency checking during stream
17688 out -- we don't want to start writing decls in different
17689 sections. */
17690 table.section = base[0]->section;
17691 bytes += write_cluster (to, base, size, table, counts, &crc);
17692 table.section = 0;
17693 }
17694 }
17695
17696 /* We'd better have written as many sections and found as many
17697 namespaces as we predicted. */
17698 gcc_assert (counts[MSC_sec_hwm] == to->get_section_limit ()
17699 && spaces.length () == counts[MSC_namespaces]);
17700
17701 /* Write the entitites. None happens if we contain namespaces or
17702 nothing. */
17703 if (counts[MSC_entities])
17704 write_entities (to, sccs, counts[MSC_entities], &crc);
17705
17706 /* Write the namespaces. */
17707 if (counts[MSC_namespaces])
17708 write_namespaces (to, spaces, counts[MSC_namespaces], &crc);
17709
17710 /* Write the bindings themselves. */
17711 counts[MSC_bindings] = write_bindings (to, sccs, &crc);
17712
17713 /* Write the unnamed. */
17714 if (counts[MSC_pendings])
17715 write_pendings (to, sccs, table, counts[MSC_pendings], &crc);
17716
17717 /* Write the import table. */
17718 if (config.num_imports > 1)
17719 write_imports (to, &crc);
17720
17721 /* Write elided partition table. */
17722 if (config.num_partitions)
17723 write_partitions (to, config.num_partitions, &crc);
17724
17725 /* Write the line maps. */
17726 write_ordinary_maps (to, map_info, &config, config.num_partitions, &crc);
17727 write_macro_maps (to, map_info, &config, &crc);
17728
17729 if (is_header ())
17730 {
17731 counts[MSC_macros] = write_macros (to, reader, &crc);
17732 counts[MSC_inits] = write_inits (to, table, &crc);
17733 }
17734
17735 unsigned clusters = counts[MSC_sec_hwm] - counts[MSC_sec_lwm];
17736 dump () && dump ("Wrote %u clusters, average %u bytes/cluster",
17737 clusters, (bytes + clusters / 2) / (clusters + !clusters));
17738
17739 write_counts (to, counts, &crc);
17740
17741 /* And finish up. */
17742 write_config (to, config, crc);
17743
17744 spaces.release ();
17745 sccs.release ();
17746
17747 /* Human-readable info. */
17748 write_readme (to, reader, config.dialect_str, extensions);
17749
17750 // FIXME:QOI: Have a command line switch to control more detailed
17751 // information (which might leak data you do not want to leak).
17752 // Perhaps (some of) the write_readme contents should also be
17753 // so-controlled.
17754 if (false)
17755 write_env (to);
17756
17757 trees_out::instrument ();
17758 dump () && dump ("Wrote %u sections", to->get_section_limit ());
17759 }
17760
17761 /* Initial read of a CMI. Checks config, loads up imports and line
17762 maps. */
17763
17764 bool
17765 module_state::read_initial (cpp_reader *reader)
17766 {
17767 module_state_config config;
17768 bool ok = true;
17769
17770 if (ok && !from ()->begin (loc))
17771 ok = false;
17772
17773 if (ok && !read_config (config))
17774 ok = false;
17775
17776 bool have_locs = ok && read_prepare_maps (&config);
17777
17778 /* Ordinary maps before the imports. */
17779 if (have_locs && !read_ordinary_maps ())
17780 ok = false;
17781
17782 /* Allocate the REMAP vector. */
17783 slurp->alloc_remap (config.num_imports);
17784
17785 if (ok)
17786 {
17787 /* Read the import table. Decrement current to stop this CMI
17788 from being evicted during the import. */
17789 slurp->current--;
17790 if (config.num_imports > 1 && !read_imports (reader, line_table))
17791 ok = false;
17792 slurp->current++;
17793 }
17794
17795 /* Read the elided partition table, if we're the primary partition. */
17796 if (ok && config.num_partitions && is_module ()
17797 && !read_partitions (config.num_partitions))
17798 ok = false;
17799
17800 /* Determine the module's number. */
17801 gcc_checking_assert (mod == MODULE_UNKNOWN);
17802 gcc_checking_assert (this != (*modules)[0]);
17803
17804 /* We'll run out of other resources before we run out of module
17805 indices. */
17806 mod = modules->length ();
17807 vec_safe_push (modules, this);
17808
17809 /* We always import and export ourselves. */
17810 bitmap_set_bit (imports, mod);
17811 bitmap_set_bit (exports, mod);
17812
17813 if (ok)
17814 (*slurp->remap)[0] = mod << 1;
17815 dump () && dump ("Assigning %M module number %u", this, mod);
17816
17817 /* We should not have been frozen during the importing done by
17818 read_config. */
17819 gcc_assert (!from ()->is_frozen ());
17820
17821 /* Macro maps after the imports. */
17822 if (ok && have_locs && !read_macro_maps ())
17823 ok = false;
17824
17825 gcc_assert (slurp->current == ~0u);
17826 return ok;
17827 }
17828
17829 /* Read a preprocessor state. */
17830
17831 bool
17832 module_state::read_preprocessor (bool outermost)
17833 {
17834 gcc_checking_assert (is_header () && slurp
17835 && slurp->remap_module (0) == mod);
17836
17837 if (loadedness == ML_PREPROCESSOR)
17838 return !(from () && from ()->get_error ());
17839
17840 bool ok = true;
17841
17842 /* Read direct header imports. */
17843 unsigned len = slurp->remap->length ();
17844 for (unsigned ix = 1; ok && ix != len; ix++)
17845 {
17846 unsigned map = (*slurp->remap)[ix];
17847 if (map & 1)
17848 {
17849 module_state *import = (*modules)[map >> 1];
17850 if (import->is_header ())
17851 {
17852 ok = import->read_preprocessor (false);
17853 bitmap_ior_into (slurp->headers, import->slurp->headers);
17854 }
17855 }
17856 }
17857
17858 /* Record as a direct header. */
17859 if (ok)
17860 bitmap_set_bit (slurp->headers, mod);
17861
17862 if (ok && !read_macros ())
17863 ok = false;
17864
17865 loadedness = ML_PREPROCESSOR;
17866 announce ("macros");
17867
17868 if (flag_preprocess_only)
17869 /* We're done with the string table. */
17870 from ()->release ();
17871
17872 return check_read (outermost, ok);
17873 }
17874
17875 static unsigned lazy_snum;
17876
17877 static bool
17878 recursive_lazy (unsigned snum = ~0u)
17879 {
17880 if (lazy_snum)
17881 {
17882 error_at (input_location, "recursive lazy load");
17883 return true;
17884 }
17885
17886 lazy_snum = snum;
17887 return false;
17888 }
17889
17890 /* Read language state. */
17891
17892 bool
17893 module_state::read_language (bool outermost)
17894 {
17895 gcc_checking_assert (!lazy_snum);
17896
17897 if (loadedness == ML_LANGUAGE)
17898 return !(slurp && from () && from ()->get_error ());
17899
17900 gcc_checking_assert (slurp && slurp->current == ~0u
17901 && slurp->remap_module (0) == mod);
17902
17903 bool ok = true;
17904
17905 /* Read direct imports. */
17906 unsigned len = slurp->remap->length ();
17907 for (unsigned ix = 1; ok && ix != len; ix++)
17908 {
17909 unsigned map = (*slurp->remap)[ix];
17910 if (map & 1)
17911 {
17912 module_state *import = (*modules)[map >> 1];
17913 if (!import->read_language (false))
17914 ok = false;
17915 }
17916 }
17917
17918 unsigned counts[MSC_HWM];
17919
17920 if (ok && !read_counts (counts))
17921 ok = false;
17922
17923 function_depth++; /* Prevent unexpected GCs. */
17924
17925 /* Read the entity table. */
17926 entity_lwm = vec_safe_length (entity_ary);
17927 if (ok && counts[MSC_entities]
17928 && !read_entities (counts[MSC_entities],
17929 counts[MSC_sec_lwm], counts[MSC_sec_hwm]))
17930 ok = false;
17931
17932 /* Read the namespace hierarchy. */
17933 if (ok && counts[MSC_namespaces]
17934 && !read_namespaces (counts[MSC_namespaces]))
17935 ok = false;
17936
17937 if (ok && !read_bindings (counts[MSC_bindings],
17938 counts[MSC_sec_lwm], counts[MSC_sec_hwm]))
17939 ok = false;
17940
17941 /* And unnamed. */
17942 if (ok && counts[MSC_pendings] && !read_pendings (counts[MSC_pendings]))
17943 ok = false;
17944
17945 if (ok)
17946 {
17947 slurp->remaining = counts[MSC_sec_hwm] - counts[MSC_sec_lwm];
17948 available_clusters += counts[MSC_sec_hwm] - counts[MSC_sec_lwm];
17949 }
17950
17951 if (!flag_module_lazy
17952 || (is_partition ()
17953 && module_interface_p ()
17954 && !module_partition_p ()))
17955 {
17956 /* Read the sections in forward order, so that dependencies are read
17957 first. See note about tarjan_connect. */
17958 ggc_collect ();
17959
17960 lazy_snum = ~0u;
17961
17962 unsigned hwm = counts[MSC_sec_hwm];
17963 for (unsigned ix = counts[MSC_sec_lwm]; ok && ix != hwm; ix++)
17964 {
17965 if (!load_section (ix, NULL))
17966 {
17967 ok = false;
17968 break;
17969 }
17970 ggc_collect ();
17971 }
17972
17973 lazy_snum = 0;
17974
17975 if (ok && CHECKING_P)
17976 for (unsigned ix = 0; ix != entity_num; ix++)
17977 gcc_assert (!(*entity_ary)[ix + entity_lwm].is_lazy ());
17978 }
17979
17980 // If the import is a header-unit, we need to register initializers
17981 // of any static objects it contains (looking at you _Ioinit).
17982 // Notice, the ordering of these initializers will be that of a
17983 // dynamic initializer at this point in the current TU. (Other
17984 // instances of these objects in other TUs will be initialized as
17985 // part of that TU's global initializers.)
17986 if (ok && counts[MSC_inits] && !read_inits (counts[MSC_inits]))
17987 ok = false;
17988
17989 function_depth--;
17990
17991 announce (flag_module_lazy ? "lazy" : "imported");
17992 loadedness = ML_LANGUAGE;
17993
17994 gcc_assert (slurp->current == ~0u);
17995
17996 /* We're done with the string table. */
17997 from ()->release ();
17998
17999 return check_read (outermost, ok);
18000 }
18001
18002 bool
18003 module_state::maybe_defrost ()
18004 {
18005 bool ok = true;
18006 if (from ()->is_frozen ())
18007 {
18008 if (lazy_open >= lazy_limit)
18009 freeze_an_elf ();
18010 dump () && dump ("Defrosting '%s'", filename);
18011 ok = from ()->defrost (maybe_add_cmi_prefix (filename));
18012 lazy_open++;
18013 }
18014
18015 return ok;
18016 }
18017
18018 /* Load section SNUM, dealing with laziness. It doesn't matter if we
18019 have multiple concurrent loads, because we do not use TREE_VISITED
18020 when reading back in. */
18021
18022 bool
18023 module_state::load_section (unsigned snum, binding_slot *mslot)
18024 {
18025 if (from ()->get_error ())
18026 return false;
18027
18028 if (snum >= slurp->current)
18029 from ()->set_error (elf::E_BAD_LAZY);
18030 else if (maybe_defrost ())
18031 {
18032 unsigned old_current = slurp->current;
18033 slurp->current = snum;
18034 slurp->lru = 0; /* Do not swap out. */
18035 slurp->remaining--;
18036 read_cluster (snum);
18037 slurp->lru = ++lazy_lru;
18038 slurp->current = old_current;
18039 }
18040
18041 if (mslot && mslot->is_lazy ())
18042 {
18043 /* Oops, the section didn't set this slot. */
18044 from ()->set_error (elf::E_BAD_DATA);
18045 *mslot = NULL_TREE;
18046 }
18047
18048 bool ok = !from ()->get_error ();
18049 if (!ok)
18050 {
18051 error_at (loc, "failed to read compiled module cluster %u: %s",
18052 snum, from ()->get_error (filename));
18053 note_cmi_name ();
18054 }
18055
18056 maybe_completed_reading ();
18057
18058 return ok;
18059 }
18060
18061 void
18062 module_state::maybe_completed_reading ()
18063 {
18064 if (loadedness == ML_LANGUAGE && slurp->current == ~0u && !slurp->remaining)
18065 {
18066 lazy_open--;
18067 /* We no longer need the macros, all tokenizing has been done. */
18068 slurp->release_macros ();
18069
18070 from ()->end ();
18071 slurp->close ();
18072 slurped ();
18073 }
18074 }
18075
18076 /* After a reading operation, make sure things are still ok. If not,
18077 emit an error and clean up. */
18078
18079 bool
18080 module_state::check_read (bool outermost, bool ok)
18081 {
18082 gcc_checking_assert (!outermost || slurp->current == ~0u);
18083
18084 if (!ok)
18085 from ()->set_error ();
18086
18087 if (int e = from ()->get_error ())
18088 {
18089 error_at (loc, "failed to read compiled module: %s",
18090 from ()->get_error (filename));
18091 note_cmi_name ();
18092
18093 if (e == EMFILE
18094 || e == ENFILE
18095 #if MAPPED_READING
18096 || e == ENOMEM
18097 #endif
18098 || false)
18099 inform (loc, "consider using %<-fno-module-lazy%>,"
18100 " increasing %<-param-lazy-modules=%u%> value,"
18101 " or increasing the per-process file descriptor limit",
18102 param_lazy_modules);
18103 else if (e == ENOENT)
18104 inform (loc, "imports must be built before being imported");
18105
18106 if (outermost)
18107 fatal_error (loc, "returning to the gate for a mechanical issue");
18108
18109 ok = false;
18110 }
18111
18112 maybe_completed_reading ();
18113
18114 return ok;
18115 }
18116
18117 /* Return the IDENTIFIER_NODE naming module IX. This is the name
18118 including dots. */
18119
18120 char const *
18121 module_name (unsigned ix, bool header_ok)
18122 {
18123 if (modules)
18124 {
18125 module_state *imp = (*modules)[ix];
18126
18127 if (ix && !imp->name)
18128 imp = imp->parent;
18129
18130 if (header_ok || !imp->is_header ())
18131 return imp->get_flatname ();
18132 }
18133
18134 return NULL;
18135 }
18136
18137 /* Return the bitmap describing what modules are imported. Remember,
18138 we always import ourselves. */
18139
18140 bitmap
18141 get_import_bitmap ()
18142 {
18143 return (*modules)[0]->imports;
18144 }
18145
18146 /* Return the visible imports and path of instantiation for an
18147 instantiation at TINST. If TINST is nullptr, we're not in an
18148 instantiation, and thus will return the visible imports of the
18149 current TU (and NULL *PATH_MAP_P). We cache the information on
18150 the tinst level itself. */
18151
18152 static bitmap
18153 path_of_instantiation (tinst_level *tinst, bitmap *path_map_p)
18154 {
18155 gcc_checking_assert (modules_p ());
18156
18157 if (!tinst)
18158 {
18159 /* Not inside an instantiation, just the regular case. */
18160 *path_map_p = nullptr;
18161 return get_import_bitmap ();
18162 }
18163
18164 if (!tinst->path)
18165 {
18166 /* Calculate. */
18167 bitmap visible = path_of_instantiation (tinst->next, path_map_p);
18168 bitmap path_map = *path_map_p;
18169
18170 if (!path_map)
18171 {
18172 path_map = BITMAP_GGC_ALLOC ();
18173 bitmap_set_bit (path_map, 0);
18174 }
18175
18176 tree decl = tinst->tldcl;
18177 if (TREE_CODE (decl) == TREE_LIST)
18178 decl = TREE_PURPOSE (decl);
18179 if (TYPE_P (decl))
18180 decl = TYPE_NAME (decl);
18181
18182 if (unsigned mod = get_originating_module (decl))
18183 if (!bitmap_bit_p (path_map, mod))
18184 {
18185 /* This is brand new information! */
18186 bitmap new_path = BITMAP_GGC_ALLOC ();
18187 bitmap_copy (new_path, path_map);
18188 bitmap_set_bit (new_path, mod);
18189 path_map = new_path;
18190
18191 bitmap imports = (*modules)[mod]->imports;
18192 if (bitmap_intersect_compl_p (imports, visible))
18193 {
18194 /* IMPORTS contains additional modules to VISIBLE. */
18195 bitmap new_visible = BITMAP_GGC_ALLOC ();
18196
18197 bitmap_ior (new_visible, visible, imports);
18198 visible = new_visible;
18199 }
18200 }
18201
18202 tinst->path = path_map;
18203 tinst->visible = visible;
18204 }
18205
18206 *path_map_p = tinst->path;
18207 return tinst->visible;
18208 }
18209
18210 /* Return the bitmap describing what modules are visible along the
18211 path of instantiation. If we're not an instantiation, this will be
18212 the visible imports of the TU. *PATH_MAP_P is filled in with the
18213 modules owning the instantiation path -- we see the module-linkage
18214 entities of those modules. */
18215
18216 bitmap
18217 visible_instantiation_path (bitmap *path_map_p)
18218 {
18219 if (!modules_p ())
18220 return NULL;
18221
18222 return path_of_instantiation (current_instantiation (), path_map_p);
18223 }
18224
18225 /* We've just directly imported IMPORT. Update our import/export
18226 bitmaps. IS_EXPORT is true if we're reexporting the OTHER. */
18227
18228 void
18229 module_state::set_import (module_state const *import, bool is_export)
18230 {
18231 gcc_checking_assert (this != import);
18232
18233 /* We see IMPORT's exports (which includes IMPORT). If IMPORT is
18234 the primary interface or a partition we'll see its imports. */
18235 bitmap_ior_into (imports, import->is_module () || import->is_partition ()
18236 ? import->imports : import->exports);
18237
18238 if (is_export)
18239 /* We'll export OTHER's exports. */
18240 bitmap_ior_into (exports, import->exports);
18241 }
18242
18243 /* Return the declaring entity of DECL. That is the decl determining
18244 how to decorate DECL with module information. Returns NULL_TREE if
18245 it's the global module. */
18246
18247 tree
18248 get_originating_module_decl (tree decl)
18249 {
18250 /* An enumeration constant. */
18251 if (TREE_CODE (decl) == CONST_DECL
18252 && DECL_CONTEXT (decl)
18253 && (TREE_CODE (DECL_CONTEXT (decl)) == ENUMERAL_TYPE))
18254 decl = TYPE_NAME (DECL_CONTEXT (decl));
18255 else if (TREE_CODE (decl) == FIELD_DECL
18256 || TREE_CODE (decl) == USING_DECL)
18257 {
18258 decl = DECL_CONTEXT (decl);
18259 if (TREE_CODE (decl) != FUNCTION_DECL)
18260 decl = TYPE_NAME (decl);
18261 }
18262
18263 gcc_checking_assert (TREE_CODE (decl) == TEMPLATE_DECL
18264 || TREE_CODE (decl) == FUNCTION_DECL
18265 || TREE_CODE (decl) == TYPE_DECL
18266 || TREE_CODE (decl) == VAR_DECL
18267 || TREE_CODE (decl) == CONCEPT_DECL
18268 || TREE_CODE (decl) == NAMESPACE_DECL);
18269
18270 for (;;)
18271 {
18272 /* Uninstantiated template friends are owned by the befriending
18273 class -- not their context. */
18274 if (TREE_CODE (decl) == TEMPLATE_DECL
18275 && DECL_UNINSTANTIATED_TEMPLATE_FRIEND_P (decl))
18276 decl = TYPE_NAME (DECL_CHAIN (decl));
18277
18278 int use;
18279 if (tree ti = node_template_info (decl, use))
18280 {
18281 decl = TI_TEMPLATE (ti);
18282 if (TREE_CODE (decl) != TEMPLATE_DECL)
18283 {
18284 /* A friend template specialization. */
18285 gcc_checking_assert (OVL_P (decl));
18286 return global_namespace;
18287 }
18288 }
18289 else
18290 {
18291 tree ctx = CP_DECL_CONTEXT (decl);
18292 if (TREE_CODE (ctx) == NAMESPACE_DECL)
18293 break;
18294
18295 if (TYPE_P (ctx))
18296 {
18297 ctx = TYPE_NAME (ctx);
18298 if (!ctx)
18299 {
18300 /* Some kind of internal type. */
18301 gcc_checking_assert (DECL_ARTIFICIAL (decl));
18302 return global_namespace;
18303 }
18304 }
18305 decl = ctx;
18306 }
18307 }
18308
18309 return decl;
18310 }
18311
18312 int
18313 get_originating_module (tree decl, bool for_mangle)
18314 {
18315 tree owner = get_originating_module_decl (decl);
18316
18317 if (!DECL_LANG_SPECIFIC (owner))
18318 return for_mangle ? -1 : 0;
18319
18320 if (for_mangle
18321 && (DECL_MODULE_EXPORT_P (owner) || !DECL_MODULE_PURVIEW_P (owner)))
18322 return -1;
18323
18324 if (!DECL_MODULE_IMPORT_P (owner))
18325 return 0;
18326
18327 return get_importing_module (owner);
18328 }
18329
18330 unsigned
18331 get_importing_module (tree decl, bool flexible)
18332 {
18333 unsigned index = import_entity_index (decl, flexible);
18334 if (index == ~(~0u >> 1))
18335 return -1;
18336 module_state *module = import_entity_module (index);
18337
18338 return module->mod;
18339 }
18340
18341 /* Is it permissible to redeclare DECL. */
18342
18343 bool
18344 module_may_redeclare (tree decl)
18345 {
18346 module_state *me = (*modules)[0];
18347 module_state *them = me;
18348 if (DECL_LANG_SPECIFIC (decl) && DECL_MODULE_IMPORT_P (decl))
18349 {
18350 /* We can be given the TEMPLATE_RESULT. We want the
18351 TEMPLATE_DECL. */
18352 int use_tpl = -1;
18353 if (tree ti = node_template_info (decl, use_tpl))
18354 {
18355 tree tmpl = TI_TEMPLATE (ti);
18356 if (DECL_TEMPLATE_RESULT (tmpl) == decl)
18357 decl = tmpl;
18358 // FIXME: What about partial specializations? We need to
18359 // look at the specialization list in that case. Unless our
18360 // caller's given us the right thing. An alternative would
18361 // be to put both the template and the result into the
18362 // entity hash, but that seems expensive?
18363 }
18364 unsigned index = import_entity_index (decl);
18365 them = import_entity_module (index);
18366 }
18367
18368 if (them->is_header ())
18369 {
18370 if (!header_module_p ())
18371 return !module_purview_p ();
18372
18373 if (DECL_SOURCE_LOCATION (decl) == BUILTINS_LOCATION)
18374 /* This is a builtin, being declared in header-unit. We
18375 now need to mark it as an export. */
18376 DECL_MODULE_EXPORT_P (decl) = true;
18377
18378 /* If it came from a header, it's in the global module. */
18379 return true;
18380 }
18381
18382 if (me == them)
18383 return ((DECL_LANG_SPECIFIC (decl) && DECL_MODULE_PURVIEW_P (decl))
18384 == module_purview_p ());
18385
18386 if (!me->name)
18387 me = me->parent;
18388
18389 /* We can't have found a GMF entity from a named module. */
18390 gcc_checking_assert (DECL_LANG_SPECIFIC (decl)
18391 && DECL_MODULE_PURVIEW_P (decl));
18392
18393 return me && get_primary (them) == get_primary (me);
18394 }
18395
18396 /* DECL is being created by this TU. Record it came from here. We
18397 record module purview, so we can see if partial or explicit
18398 specialization needs to be written out, even though its purviewness
18399 comes from the most general template. */
18400
18401 void
18402 set_instantiating_module (tree decl)
18403 {
18404 gcc_assert (TREE_CODE (decl) == FUNCTION_DECL
18405 || TREE_CODE (decl) == VAR_DECL
18406 || TREE_CODE (decl) == TYPE_DECL
18407 || TREE_CODE (decl) == CONCEPT_DECL
18408 || TREE_CODE (decl) == TEMPLATE_DECL
18409 || (TREE_CODE (decl) == NAMESPACE_DECL
18410 && DECL_NAMESPACE_ALIAS (decl)));
18411
18412 if (!modules_p ())
18413 return;
18414
18415 if (!DECL_LANG_SPECIFIC (decl) && module_purview_p ())
18416 retrofit_lang_decl (decl);
18417 if (DECL_LANG_SPECIFIC (decl))
18418 {
18419 DECL_MODULE_PURVIEW_P (decl) = module_purview_p ();
18420 /* If this was imported, we'll still be in the entity_hash. */
18421 DECL_MODULE_IMPORT_P (decl) = false;
18422 if (TREE_CODE (decl) == TEMPLATE_DECL)
18423 {
18424 tree res = DECL_TEMPLATE_RESULT (decl);
18425 retrofit_lang_decl (res);
18426 DECL_MODULE_PURVIEW_P (res) = DECL_MODULE_PURVIEW_P (decl);
18427 DECL_MODULE_IMPORT_P (res) = false;
18428 }
18429 }
18430 }
18431
18432 /* If DECL is a class member, whose class is not defined in this TU
18433 (it was imported), remember this decl. */
18434
18435 void
18436 set_defining_module (tree decl)
18437 {
18438 gcc_checking_assert (!DECL_LANG_SPECIFIC (decl)
18439 || !DECL_MODULE_IMPORT_P (decl));
18440
18441 if (module_has_cmi_p ())
18442 {
18443 tree ctx = DECL_CONTEXT (decl);
18444 if (ctx
18445 && (TREE_CODE (ctx) == RECORD_TYPE || TREE_CODE (ctx) == UNION_TYPE)
18446 && DECL_LANG_SPECIFIC (TYPE_NAME (ctx))
18447 && DECL_MODULE_IMPORT_P (TYPE_NAME (ctx)))
18448 {
18449 /* This entity's context is from an import. We may need to
18450 record this entity to make sure we emit it in the CMI.
18451 Template specializations are in the template hash tables,
18452 so we don't need to record them here as well. */
18453 int use_tpl = -1;
18454 tree ti = node_template_info (decl, use_tpl);
18455 if (use_tpl <= 0)
18456 {
18457 if (ti)
18458 {
18459 gcc_checking_assert (!use_tpl);
18460 /* Get to the TEMPLATE_DECL. */
18461 decl = TI_TEMPLATE (ti);
18462 }
18463
18464 /* Record it on the class_members list. */
18465 vec_safe_push (class_members, decl);
18466 }
18467 }
18468 else if (DECL_IMPLICIT_TYPEDEF_P (decl)
18469 && CLASSTYPE_TEMPLATE_SPECIALIZATION (TREE_TYPE (decl)))
18470 /* This is a partial or explicit specialization. */
18471 vec_safe_push (partial_specializations, decl);
18472 }
18473 }
18474
18475 void
18476 set_originating_module (tree decl, bool friend_p ATTRIBUTE_UNUSED)
18477 {
18478 set_instantiating_module (decl);
18479
18480 if (TREE_CODE (CP_DECL_CONTEXT (decl)) != NAMESPACE_DECL)
18481 return;
18482
18483 gcc_checking_assert (friend_p || decl == get_originating_module_decl (decl));
18484
18485 if (!module_exporting_p ())
18486 return;
18487
18488 // FIXME: Check ill-formed linkage
18489 DECL_MODULE_EXPORT_P (decl) = true;
18490 }
18491
18492 /* DECL is attached to ROOT for odr purposes. */
18493
18494 void
18495 maybe_attach_decl (tree ctx, tree decl)
18496 {
18497 if (!modules_p ())
18498 return;
18499
18500 // FIXME: For now just deal with lambdas attached to var decls.
18501 // This might be sufficient?
18502 if (TREE_CODE (ctx) != VAR_DECL)
18503 return;
18504
18505 gcc_checking_assert (DECL_NAMESPACE_SCOPE_P (ctx));
18506
18507 if (!attached_table)
18508 attached_table = new attachset::hash (EXPERIMENT (1, 400));
18509
18510 if (attached_table->add (DECL_UID (ctx), decl))
18511 {
18512 retrofit_lang_decl (ctx);
18513 DECL_MODULE_ATTACHMENTS_P (ctx) = true;
18514 }
18515 }
18516
18517 /* Create the flat name string. It is simplest to have it handy. */
18518
18519 void
18520 module_state::set_flatname ()
18521 {
18522 gcc_checking_assert (!flatname);
18523 if (parent)
18524 {
18525 auto_vec<tree,5> ids;
18526 size_t len = 0;
18527 char const *primary = NULL;
18528 size_t pfx_len = 0;
18529
18530 for (module_state *probe = this;
18531 probe;
18532 probe = probe->parent)
18533 if (is_partition () && !probe->is_partition ())
18534 {
18535 primary = probe->get_flatname ();
18536 pfx_len = strlen (primary);
18537 break;
18538 }
18539 else
18540 {
18541 ids.safe_push (probe->name);
18542 len += IDENTIFIER_LENGTH (probe->name) + 1;
18543 }
18544
18545 char *flat = XNEWVEC (char, pfx_len + len + is_partition ());
18546 flatname = flat;
18547
18548 if (primary)
18549 {
18550 memcpy (flat, primary, pfx_len);
18551 flat += pfx_len;
18552 *flat++ = ':';
18553 }
18554
18555 for (unsigned len = 0; ids.length ();)
18556 {
18557 if (len)
18558 flat[len++] = '.';
18559 tree elt = ids.pop ();
18560 unsigned l = IDENTIFIER_LENGTH (elt);
18561 memcpy (flat + len, IDENTIFIER_POINTER (elt), l + 1);
18562 len += l;
18563 }
18564 }
18565 else if (is_header ())
18566 flatname = TREE_STRING_POINTER (name);
18567 else
18568 flatname = IDENTIFIER_POINTER (name);
18569 }
18570
18571 /* Read the CMI file for a module. */
18572
18573 bool
18574 module_state::do_import (cpp_reader *reader, bool outermost)
18575 {
18576 gcc_assert (global_namespace == current_scope () && loadedness == ML_NONE);
18577
18578 loc = linemap_module_loc (line_table, loc, get_flatname ());
18579
18580 if (lazy_open >= lazy_limit)
18581 freeze_an_elf ();
18582
18583 int fd = -1;
18584 int e = ENOENT;
18585 if (filename)
18586 {
18587 const char *file = maybe_add_cmi_prefix (filename);
18588 dump () && dump ("CMI is %s", file);
18589 fd = open (file, O_RDONLY | O_CLOEXEC | O_BINARY);
18590 e = errno;
18591 }
18592
18593 gcc_checking_assert (!slurp);
18594 slurp = new slurping (new elf_in (fd, e));
18595
18596 bool ok = true;
18597 if (!from ()->get_error ())
18598 {
18599 announce ("importing");
18600 loadedness = ML_CONFIG;
18601 lazy_open++;
18602 ok = read_initial (reader);
18603 slurp->lru = ++lazy_lru;
18604 }
18605
18606 gcc_assert (slurp->current == ~0u);
18607
18608 return check_read (outermost, ok);
18609 }
18610
18611 /* Attempt to increase the file descriptor limit. */
18612
18613 static bool
18614 try_increase_lazy (unsigned want)
18615 {
18616 gcc_checking_assert (lazy_open >= lazy_limit);
18617
18618 /* If we're increasing, saturate at hard limit. */
18619 if (want > lazy_hard_limit && lazy_limit < lazy_hard_limit)
18620 want = lazy_hard_limit;
18621
18622 #if HAVE_SETRLIMIT
18623 if ((!lazy_limit || !param_lazy_modules)
18624 && lazy_hard_limit
18625 && want <= lazy_hard_limit)
18626 {
18627 struct rlimit rlimit;
18628 rlimit.rlim_cur = want + LAZY_HEADROOM;
18629 rlimit.rlim_max = lazy_hard_limit + LAZY_HEADROOM;
18630 if (!setrlimit (RLIMIT_NOFILE, &rlimit))
18631 lazy_limit = want;
18632 }
18633 #endif
18634
18635 return lazy_open < lazy_limit;
18636 }
18637
18638 /* Pick a victim module to freeze its reader. */
18639
18640 void
18641 module_state::freeze_an_elf ()
18642 {
18643 if (try_increase_lazy (lazy_open * 2))
18644 return;
18645
18646 module_state *victim = NULL;
18647 for (unsigned ix = modules->length (); ix--;)
18648 {
18649 module_state *candidate = (*modules)[ix];
18650 if (candidate && candidate->slurp && candidate->slurp->lru
18651 && candidate->from ()->is_freezable ()
18652 && (!victim || victim->slurp->lru > candidate->slurp->lru))
18653 victim = candidate;
18654 }
18655
18656 if (victim)
18657 {
18658 dump () && dump ("Freezing '%s'", victim->filename);
18659 if (victim->slurp->macro_defs.size)
18660 /* Save the macro definitions to a buffer. */
18661 victim->from ()->preserve (victim->slurp->macro_defs);
18662 if (victim->slurp->macro_tbl.size)
18663 /* Save the macro definitions to a buffer. */
18664 victim->from ()->preserve (victim->slurp->macro_tbl);
18665 victim->from ()->freeze ();
18666 lazy_open--;
18667 }
18668 else
18669 dump () && dump ("No module available for freezing");
18670 }
18671
18672 /* Load the lazy slot *MSLOT, INDEX'th slot of the module. */
18673
18674 bool
18675 module_state::lazy_load (unsigned index, binding_slot *mslot)
18676 {
18677 unsigned n = dump.push (this);
18678
18679 gcc_checking_assert (function_depth);
18680
18681 unsigned cookie = mslot->get_lazy ();
18682 unsigned snum = cookie >> 2;
18683 dump () && dump ("Loading entity %M[%u] section:%u", this, index, snum);
18684
18685 bool ok = load_section (snum, mslot);
18686
18687 dump.pop (n);
18688
18689 return ok;
18690 }
18691
18692 /* Load MOD's binding for NS::ID into *MSLOT. *MSLOT contains the
18693 lazy cookie. OUTER is true if this is the outermost lazy, (used
18694 for diagnostics). */
18695
18696 void
18697 lazy_load_binding (unsigned mod, tree ns, tree id, binding_slot *mslot)
18698 {
18699 int count = errorcount + warningcount;
18700
18701 timevar_start (TV_MODULE_IMPORT);
18702
18703 /* Stop GC happening, even in outermost loads (because our caller
18704 could well be building up a lookup set). */
18705 function_depth++;
18706
18707 gcc_checking_assert (mod);
18708 module_state *module = (*modules)[mod];
18709 unsigned n = dump.push (module);
18710
18711 unsigned snum = mslot->get_lazy ();
18712 dump () && dump ("Lazily binding %P@%N section:%u", ns, id,
18713 module->name, snum);
18714
18715 bool ok = !recursive_lazy (snum);
18716 if (ok)
18717 {
18718 ok = module->load_section (snum, mslot);
18719 lazy_snum = 0;
18720 }
18721
18722 dump.pop (n);
18723
18724 function_depth--;
18725
18726 timevar_stop (TV_MODULE_IMPORT);
18727
18728 if (!ok)
18729 fatal_error (input_location,
18730 module->is_header ()
18731 ? G_("failed to load binding %<%E%s%E%>")
18732 : G_("failed to load binding %<%E%s%E@%s%>"),
18733 ns, &"::"[ns == global_namespace ? 2 : 0], id,
18734 module->get_flatname ());
18735
18736 if (count != errorcount + warningcount)
18737 inform (input_location,
18738 module->is_header ()
18739 ? G_("during load of binding %<%E%s%E%>")
18740 : G_("during load of binding %<%E%s%E@%s%>"),
18741 ns, &"::"[ns == global_namespace ? 2 : 0], id,
18742 module->get_flatname ());
18743 }
18744
18745 /* Load any pending specializations of TMPL. Called just before
18746 instantiating TMPL. */
18747
18748 void
18749 lazy_load_specializations (tree tmpl)
18750 {
18751 gcc_checking_assert (DECL_MODULE_PENDING_SPECIALIZATIONS_P (tmpl)
18752 && DECL_MODULE_ENTITY_P (tmpl));
18753
18754 int count = errorcount + warningcount;
18755
18756 timevar_start (TV_MODULE_IMPORT);
18757 bool ok = !recursive_lazy ();
18758 if (ok)
18759 {
18760 unsigned ident = import_entity_index (tmpl);
18761 if (pendset *set = pending_table->get (ident, true))
18762 {
18763 function_depth++; /* Prevent GC */
18764 unsigned n = dump.push (NULL);
18765 dump ()
18766 && dump ("Reading %u pending specializations keyed to %M[%u] %N",
18767 set->num, import_entity_module (ident),
18768 ident - import_entity_module (ident)->entity_lwm, tmpl);
18769 if (!pendset_lazy_load (set, true))
18770 ok = false;
18771 dump.pop (n);
18772
18773 function_depth--;
18774 }
18775 lazy_snum = 0;
18776 }
18777
18778 timevar_stop (TV_MODULE_IMPORT);
18779
18780 if (!ok)
18781 fatal_error (input_location, "failed to load specializations keyed to %qD",
18782 tmpl);
18783
18784 if (count != errorcount + warningcount)
18785 inform (input_location,
18786 "during load of specializations keyed to %qD", tmpl);
18787 }
18788
18789 void
18790 lazy_load_members (tree decl)
18791 {
18792 gcc_checking_assert (DECL_MODULE_PENDING_MEMBERS_P (decl));
18793 if (!DECL_MODULE_ENTITY_P (decl))
18794 {
18795 // FIXME: I can't help feeling that DECL_TEMPLATE_RESULT should
18796 // be inserted into the entity map, or perhaps have the same
18797 // DECL_UID as the template, so I don't have to do this dance
18798 // here and elsewhere. It also simplifies when DECL is a
18799 // partial specialization. (also noted elsewhere as an issue)
18800 tree ti = CLASSTYPE_TEMPLATE_INFO (TREE_TYPE (decl));
18801 tree tmpl = TI_TEMPLATE (ti);
18802 gcc_checking_assert (DECL_TEMPLATE_RESULT (tmpl) == decl);
18803 decl = tmpl;
18804 }
18805
18806 timevar_start (TV_MODULE_IMPORT);
18807 unsigned ident = import_entity_index (decl);
18808 if (pendset *set = pending_table->get (~ident, true))
18809 {
18810 function_depth++; /* Prevent GC */
18811 unsigned n = dump.push (NULL);
18812 dump () && dump ("Reading %u pending members keyed to %M[%u] %N",
18813 set->num, import_entity_module (ident),
18814 ident - import_entity_module (ident)->entity_lwm, decl);
18815 pendset_lazy_load (set, false);
18816 dump.pop (n);
18817
18818 function_depth--;
18819 }
18820 timevar_stop (TV_MODULE_IMPORT);
18821 }
18822
18823 static void
18824 direct_import (module_state *import, cpp_reader *reader)
18825 {
18826 timevar_start (TV_MODULE_IMPORT);
18827 unsigned n = dump.push (import);
18828
18829 gcc_checking_assert (import->is_direct () && import->is_rooted ());
18830 if (import->loadedness == ML_NONE)
18831 if (!import->do_import (reader, true))
18832 gcc_unreachable ();
18833
18834 if (import->loadedness < ML_LANGUAGE)
18835 {
18836 if (!attached_table)
18837 attached_table = new attachset::hash (EXPERIMENT (1, 400));
18838 import->read_language (true);
18839 }
18840
18841 (*modules)[0]->set_import (import, import->exported_p);
18842
18843 dump.pop (n);
18844 timevar_stop (TV_MODULE_IMPORT);
18845 }
18846
18847 /* Import module IMPORT. */
18848
18849 void
18850 import_module (module_state *import, location_t from_loc, bool exporting_p,
18851 tree, cpp_reader *reader)
18852 {
18853 if (!import->check_not_purview (from_loc))
18854 return;
18855
18856 if (!import->is_header () && current_lang_depth ())
18857 /* Only header units should appear inside language
18858 specifications. The std doesn't specify this, but I think
18859 that's an error in resolving US 033, because language linkage
18860 is also our escape clause to getting things into the global
18861 module, so we don't want to confuse things by having to think
18862 about whether 'extern "C++" { import foo; }' puts foo's
18863 contents into the global module all of a sudden. */
18864 warning (0, "import of named module %qs inside language-linkage block",
18865 import->get_flatname ());
18866
18867 if (exporting_p || module_exporting_p ())
18868 import->exported_p = true;
18869
18870 if (import->loadedness != ML_NONE)
18871 {
18872 from_loc = ordinary_loc_of (line_table, from_loc);
18873 linemap_module_reparent (line_table, import->loc, from_loc);
18874 }
18875 gcc_checking_assert (!import->module_p);
18876 gcc_checking_assert (import->is_direct () && import->is_rooted ());
18877
18878 direct_import (import, reader);
18879 }
18880
18881 /* Declare the name of the current module to be NAME. EXPORTING_p is
18882 true if this TU is the exporting module unit. */
18883
18884 void
18885 declare_module (module_state *module, location_t from_loc, bool exporting_p,
18886 tree, cpp_reader *reader)
18887 {
18888 gcc_assert (global_namespace == current_scope ());
18889
18890 module_state *current = (*modules)[0];
18891 if (module_purview_p () || module->loadedness != ML_NONE)
18892 {
18893 error_at (from_loc, module_purview_p ()
18894 ? G_("module already declared")
18895 : G_("module already imported"));
18896 if (module_purview_p ())
18897 module = current;
18898 inform (module->loc, module_purview_p ()
18899 ? G_("module %qs declared here")
18900 : G_("module %qs imported here"),
18901 module->get_flatname ());
18902 return;
18903 }
18904
18905 gcc_checking_assert (module->module_p);
18906 gcc_checking_assert (module->is_direct () && module->is_rooted ());
18907
18908 /* Yer a module, 'arry. */
18909 module_kind &= ~MK_GLOBAL;
18910 module_kind |= MK_MODULE;
18911
18912 if (module->is_partition () || exporting_p)
18913 {
18914 gcc_checking_assert (module->get_flatname ());
18915
18916 if (module->is_partition ())
18917 module_kind |= MK_PARTITION;
18918
18919 if (exporting_p)
18920 {
18921 module->interface_p = true;
18922 module_kind |= MK_INTERFACE;
18923 }
18924
18925 if (module->is_header ())
18926 module_kind |= MK_GLOBAL | MK_EXPORTING;
18927
18928 /* Copy the importing information we may have already done. We
18929 do not need to separate out the imports that only happen in
18930 the GMF, inspite of what the literal wording of the std
18931 might imply. See p2191, the core list had a discussion
18932 where the module implementors agreed that the GMF of a named
18933 module is invisible to importers. */
18934 module->imports = current->imports;
18935
18936 module->mod = 0;
18937 (*modules)[0] = module;
18938 }
18939 else
18940 {
18941 module->interface_p = true;
18942 current->parent = module; /* So mangler knows module identity. */
18943 direct_import (module, reader);
18944 }
18945 }
18946
18947 /* +1, we're the primary or a partition. Therefore emitting a
18948 globally-callable idemportent initializer function.
18949 -1, we have direct imports. Therefore emitting calls to their
18950 initializers. */
18951
18952 int
18953 module_initializer_kind ()
18954 {
18955 int result = 0;
18956
18957 if (module_has_cmi_p () && !header_module_p ())
18958 result = +1;
18959 else if (num_init_calls_needed)
18960 result = -1;
18961
18962 return result;
18963 }
18964
18965 /* Emit calls to each direct import's global initializer. Including
18966 direct imports of directly imported header units. The initializers
18967 of (static) entities in header units will be called by their
18968 importing modules (for the instance contained within that), or by
18969 the current TU (for the instances we've brought in). Of course
18970 such header unit behaviour is evil, but iostream went through that
18971 door some time ago. */
18972
18973 void
18974 module_add_import_initializers ()
18975 {
18976 unsigned calls = 0;
18977 if (modules)
18978 {
18979 tree fntype = build_function_type (void_type_node, void_list_node);
18980 releasing_vec args; // There are no args
18981
18982 for (unsigned ix = modules->length (); --ix;)
18983 {
18984 module_state *import = (*modules)[ix];
18985 if (import->call_init_p)
18986 {
18987 tree name = mangle_module_global_init (ix);
18988 tree fndecl = build_lang_decl (FUNCTION_DECL, name, fntype);
18989
18990 DECL_CONTEXT (fndecl) = FROB_CONTEXT (global_namespace);
18991 SET_DECL_ASSEMBLER_NAME (fndecl, name);
18992 TREE_PUBLIC (fndecl) = true;
18993 determine_visibility (fndecl);
18994
18995 tree call = cp_build_function_call_vec (fndecl, &args,
18996 tf_warning_or_error);
18997 finish_expr_stmt (call);
18998
18999 calls++;
19000 }
19001 }
19002 }
19003
19004 gcc_checking_assert (calls == num_init_calls_needed);
19005 }
19006
19007 /* NAME & LEN are a preprocessed header name, possibly including the
19008 surrounding "" or <> characters. Return the raw string name of the
19009 module to which it refers. This will be an absolute path, or begin
19010 with ./, so it is immediately distinguishable from a (non-header
19011 unit) module name. If READER is non-null, ask the preprocessor to
19012 locate the header to which it refers using the appropriate include
19013 path. Note that we do never do \ processing of the string, as that
19014 matches the preprocessor's behaviour. */
19015
19016 static const char *
19017 canonicalize_header_name (cpp_reader *reader, location_t loc, bool unquoted,
19018 const char *str, size_t &len_r)
19019 {
19020 size_t len = len_r;
19021 static char *buf = 0;
19022 static size_t alloc = 0;
19023
19024 if (!unquoted)
19025 {
19026 gcc_checking_assert (len >= 2
19027 && ((reader && str[0] == '<' && str[len-1] == '>')
19028 || (str[0] == '"' && str[len-1] == '"')));
19029 str += 1;
19030 len -= 2;
19031 }
19032
19033 if (reader)
19034 {
19035 gcc_assert (!unquoted);
19036
19037 if (len >= alloc)
19038 {
19039 alloc = len + 1;
19040 buf = XRESIZEVEC (char, buf, alloc);
19041 }
19042 memcpy (buf, str, len);
19043 buf[len] = 0;
19044
19045 if (const char *hdr
19046 = cpp_find_header_unit (reader, buf, str[-1] == '<', loc))
19047 {
19048 len = strlen (hdr);
19049 str = hdr;
19050 }
19051 else
19052 str = buf;
19053 }
19054
19055 if (!(str[0] == '.' ? IS_DIR_SEPARATOR (str[1]) : IS_ABSOLUTE_PATH (str)))
19056 {
19057 /* Prepend './' */
19058 if (len + 3 > alloc)
19059 {
19060 alloc = len + 3;
19061 buf = XRESIZEVEC (char, buf, alloc);
19062 }
19063
19064 buf[0] = '.';
19065 buf[1] = DIR_SEPARATOR;
19066 memmove (buf + 2, str, len);
19067 len += 2;
19068 buf[len] = 0;
19069 str = buf;
19070 }
19071
19072 len_r = len;
19073 return str;
19074 }
19075
19076 /* Set the CMI name from a cody packet. Issue an error if
19077 ill-formed. */
19078
19079 void module_state::set_filename (const Cody::Packet &packet)
19080 {
19081 gcc_checking_assert (!filename);
19082 if (packet.GetCode () == Cody::Client::PC_PATHNAME)
19083 filename = xstrdup (packet.GetString ().c_str ());
19084 else
19085 {
19086 gcc_checking_assert (packet.GetCode () == Cody::Client::PC_ERROR);
19087 error_at (loc, "unknown Compiled Module Interface: %s",
19088 packet.GetString ().c_str ());
19089 }
19090 }
19091
19092 /* Figure out whether to treat HEADER as an include or an import. */
19093
19094 static char *
19095 maybe_translate_include (cpp_reader *reader, line_maps *lmaps, location_t loc,
19096 const char *path)
19097 {
19098 if (!modules_p ())
19099 {
19100 /* Turn off. */
19101 cpp_get_callbacks (reader)->translate_include = NULL;
19102 return nullptr;
19103 }
19104
19105 if (!spans.init_p ())
19106 /* Before the main file, don't divert. */
19107 return nullptr;
19108
19109 dump.push (NULL);
19110
19111 dump () && dump ("Checking include translation '%s'", path);
19112 auto *mapper = get_mapper (cpp_main_loc (reader));
19113
19114 size_t len = strlen (path);
19115 path = canonicalize_header_name (NULL, loc, true, path, len);
19116 auto packet = mapper->IncludeTranslate (path, Cody::Flags::None, len);
19117 int xlate = false;
19118 if (packet.GetCode () == Cody::Client::PC_BOOL)
19119 xlate = -int (packet.GetInteger ());
19120 else if (packet.GetCode () == Cody::Client::PC_PATHNAME)
19121 {
19122 /* Record the CMI name for when we do the import. */
19123 module_state *import = get_module (build_string (len, path));
19124 import->set_filename (packet);
19125 xlate = +1;
19126 }
19127 else
19128 {
19129 gcc_checking_assert (packet.GetCode () == Cody::Client::PC_ERROR);
19130 error_at (loc, "cannot determine %<#include%> translation of %s: %s",
19131 path, packet.GetString ().c_str ());
19132 }
19133
19134 bool note = false;
19135 if (note_include_translate_yes && xlate > 1)
19136 note = true;
19137 else if (note_include_translate_no && xlate == 0)
19138 note = true;
19139 else if (note_includes)
19140 {
19141 /* We do not expect the note_includes vector to be large, so O(N)
19142 iteration. */
19143 for (unsigned ix = note_includes->length (); !note && ix--;)
19144 {
19145 const char *hdr = (*note_includes)[ix];
19146 size_t hdr_len = strlen (hdr);
19147 if ((hdr_len == len
19148 || (hdr_len < len && IS_DIR_SEPARATOR (path[len - hdr_len - 1])))
19149 && !memcmp (hdr, path + len - hdr_len, hdr_len))
19150 note = true;
19151 }
19152 }
19153
19154 if (note)
19155 inform (loc, xlate
19156 ? G_("include %qs translated to import")
19157 : G_("include %qs processed textually") , path);
19158
19159 dump () && dump (xlate ? "Translating include to import"
19160 : "Keeping include as include");
19161 dump.pop (0);
19162
19163 if (!(xlate > 0))
19164 return nullptr;
19165
19166 /* Create the translation text. */
19167 loc = ordinary_loc_of (lmaps, loc);
19168 const line_map_ordinary *map
19169 = linemap_check_ordinary (linemap_lookup (lmaps, loc));
19170 unsigned col = SOURCE_COLUMN (map, loc);
19171 col -= (col != 0); /* Columns are 1-based. */
19172
19173 unsigned alloc = len + col + 60;
19174 char *res = XNEWVEC (char, alloc);
19175
19176 strcpy (res, "__import");
19177 unsigned actual = 8;
19178 if (col > actual)
19179 {
19180 /* Pad out so the filename appears at the same position. */
19181 memset (res + actual, ' ', col - actual);
19182 actual = col;
19183 }
19184 /* No need to encode characters, that's not how header names are
19185 handled. */
19186 actual += snprintf (res + actual, alloc - actual,
19187 "\"%s\" [[__translated]];\n", path);
19188 gcc_checking_assert (actual < alloc);
19189
19190 /* cpplib will delete the buffer. */
19191 return res;
19192 }
19193
19194 static void
19195 begin_header_unit (cpp_reader *reader)
19196 {
19197 /* Set the module header name from the main_input_filename. */
19198 const char *main = main_input_filename;
19199 size_t len = strlen (main);
19200 main = canonicalize_header_name (NULL, 0, true, main, len);
19201 module_state *module = get_module (build_string (len, main));
19202
19203 preprocess_module (module, cpp_main_loc (reader), false, false, true, reader);
19204 }
19205
19206 /* We've just properly entered the main source file. I.e. after the
19207 command line, builtins and forced headers. Record the line map and
19208 location of this map. Note we may be called more than once. The
19209 first call sticks. */
19210
19211 void
19212 module_begin_main_file (cpp_reader *reader, line_maps *lmaps,
19213 const line_map_ordinary *map)
19214 {
19215 gcc_checking_assert (lmaps == line_table);
19216 if (modules_p () && !spans.init_p ())
19217 {
19218 unsigned n = dump.push (NULL);
19219 spans.init (lmaps, map);
19220 dump.pop (n);
19221 if (flag_header_unit && !cpp_get_options (reader)->preprocessed)
19222 {
19223 /* Tell the preprocessor this is an include file. */
19224 cpp_retrofit_as_include (reader);
19225 begin_header_unit (reader);
19226 }
19227 }
19228 }
19229
19230 /* We've just lexed a module-specific control line for MODULE. Mark
19231 the module as a direct import, and possibly load up its macro
19232 state. Returns the primary module, if this is a module
19233 declaration. */
19234 /* Perhaps we should offer a preprocessing mode where we read the
19235 directives from the header unit, rather than require the header's
19236 CMI. */
19237
19238 module_state *
19239 preprocess_module (module_state *module, location_t from_loc,
19240 bool in_purview, bool is_import, bool is_export,
19241 cpp_reader *reader)
19242 {
19243 if (!is_import)
19244 {
19245 if (module->loc)
19246 /* It's already been mentioned, so ignore its module-ness. */
19247 is_import = true;
19248 else
19249 {
19250 /* Record it is the module. */
19251 module->module_p = true;
19252 if (is_export)
19253 {
19254 module->exported_p = true;
19255 module->interface_p = true;
19256 }
19257 }
19258 }
19259
19260 if (module->directness < MD_DIRECT + in_purview)
19261 {
19262 /* Mark as a direct import. */
19263 module->directness = module_directness (MD_DIRECT + in_purview);
19264
19265 /* Set the location to be most informative for users. */
19266 from_loc = ordinary_loc_of (line_table, from_loc);
19267 if (module->loadedness != ML_NONE)
19268 linemap_module_reparent (line_table, module->loc, from_loc);
19269 else
19270 {
19271 module->loc = from_loc;
19272 if (!module->flatname)
19273 module->set_flatname ();
19274 }
19275 }
19276
19277 if (is_import
19278 && !module->is_module () && module->is_header ()
19279 && module->loadedness < ML_PREPROCESSOR
19280 && (!cpp_get_options (reader)->preprocessed
19281 || cpp_get_options (reader)->directives_only))
19282 {
19283 timevar_start (TV_MODULE_IMPORT);
19284 unsigned n = dump.push (module);
19285
19286 if (module->loadedness == ML_NONE)
19287 {
19288 unsigned pre_hwm = 0;
19289
19290 /* Preserve the state of the line-map. */
19291 pre_hwm = LINEMAPS_ORDINARY_USED (line_table);
19292 /* We only need to close the span, if we're going to emit a
19293 CMI. But that's a little tricky -- our token scanner
19294 needs to be smarter -- and this isn't much state.
19295 Remember, we've not parsed anything at this point, so
19296 our module state flags are inadequate. */
19297 spans.maybe_init ();
19298 spans.close ();
19299
19300 if (!module->filename)
19301 {
19302 auto *mapper = get_mapper (cpp_main_loc (reader));
19303 auto packet = mapper->ModuleImport (module->get_flatname ());
19304 module->set_filename (packet);
19305 }
19306 module->do_import (reader, true);
19307
19308 /* Restore the line-map state. */
19309 linemap_module_restore (line_table, pre_hwm);
19310 spans.open ();
19311 }
19312
19313 if (module->loadedness < ML_PREPROCESSOR)
19314 if (module->read_preprocessor (true))
19315 module->import_macros ();
19316
19317 dump.pop (n);
19318 timevar_stop (TV_MODULE_IMPORT);
19319 }
19320
19321 return is_import ? NULL : get_primary (module);
19322 }
19323
19324 /* We've completed phase-4 translation. Emit any dependency
19325 information for the not-yet-loaded direct imports, and fill in
19326 their file names. We'll have already loaded up the direct header
19327 unit wavefront. */
19328
19329 void
19330 preprocessed_module (cpp_reader *reader)
19331 {
19332 auto *mapper = get_mapper (cpp_main_loc (reader));
19333
19334 spans.maybe_init ();
19335 spans.close ();
19336
19337 /* Stupid GTY doesn't grok a typedef here. And using type = is, too
19338 modern. */
19339 #define iterator hash_table<module_state_hash>::iterator
19340 /* using iterator = hash_table<module_state_hash>::iterator; */
19341
19342 /* Walk the module hash, asking for the names of all unknown
19343 direct imports and informing of an export (if that's what we
19344 are). Notice these are emitted even when preprocessing as they
19345 inform the server of dependency edges. */
19346 timevar_start (TV_MODULE_MAPPER);
19347
19348 dump.push (NULL);
19349 dump () && dump ("Resolving direct import names");
19350
19351 if (!flag_preprocess_only
19352 || bool (mapper->get_flags () & Cody::Flags::NameOnly)
19353 || cpp_get_deps (reader))
19354 {
19355 mapper->Cork ();
19356 iterator end = modules_hash->end ();
19357 for (iterator iter = modules_hash->begin (); iter != end; ++iter)
19358 {
19359 module_state *module = *iter;
19360 if (module->is_direct () && !module->filename)
19361 {
19362 Cody::Flags flags
19363 = (flag_preprocess_only ? Cody::Flags::None
19364 : Cody::Flags::NameOnly);
19365
19366 if (module->module_p
19367 && (module->is_partition () || module->exported_p))
19368 mapper->ModuleExport (module->get_flatname (), flags);
19369 else
19370 mapper->ModuleImport (module->get_flatname (), flags);
19371 }
19372 }
19373
19374 auto response = mapper->Uncork ();
19375 auto r_iter = response.begin ();
19376 for (iterator iter = modules_hash->begin (); iter != end; ++iter)
19377 {
19378 module_state *module = *iter;
19379
19380 if (module->is_direct () && !module->filename)
19381 {
19382 Cody::Packet const &p = *r_iter;
19383 ++r_iter;
19384
19385 module->set_filename (p);
19386 }
19387 }
19388 }
19389
19390 dump.pop (0);
19391
19392 timevar_stop (TV_MODULE_MAPPER);
19393
19394 if (mkdeps *deps = cpp_get_deps (reader))
19395 {
19396 /* Walk the module hash, informing the dependency machinery. */
19397 iterator end = modules_hash->end ();
19398 for (iterator iter = modules_hash->begin (); iter != end; ++iter)
19399 {
19400 module_state *module = *iter;
19401
19402 if (module->is_direct ())
19403 {
19404 if (module->is_module ()
19405 && (module->is_interface () || module->is_partition ()))
19406 deps_add_module_target (deps, module->get_flatname (),
19407 maybe_add_cmi_prefix (module->filename),
19408 module->is_header());
19409 else
19410 deps_add_module_dep (deps, module->get_flatname ());
19411 }
19412 }
19413 }
19414
19415 if (flag_header_unit && !flag_preprocess_only)
19416 {
19417 iterator end = modules_hash->end ();
19418 for (iterator iter = modules_hash->begin (); iter != end; ++iter)
19419 {
19420 module_state *module = *iter;
19421 if (module->is_module ())
19422 {
19423 declare_module (module, cpp_main_loc (reader), true, NULL, reader);
19424 break;
19425 }
19426 }
19427 }
19428 #undef iterator
19429 }
19430
19431 /* VAL is a global tree, add it to the global vec if it is
19432 interesting. Add some of its targets, if they too are
19433 interesting. We do not add identifiers, as they can be re-found
19434 via the identifier hash table. There is a cost to the number of
19435 global trees. */
19436
19437 static int
19438 maybe_add_global (tree val, unsigned &crc)
19439 {
19440 int v = 0;
19441
19442 if (val && !(identifier_p (val) || TREE_VISITED (val)))
19443 {
19444 TREE_VISITED (val) = true;
19445 crc = crc32_unsigned (crc, fixed_trees->length ());
19446 vec_safe_push (fixed_trees, val);
19447 v++;
19448
19449 if (CODE_CONTAINS_STRUCT (TREE_CODE (val), TS_TYPED))
19450 v += maybe_add_global (TREE_TYPE (val), crc);
19451 if (CODE_CONTAINS_STRUCT (TREE_CODE (val), TS_TYPE_COMMON))
19452 v += maybe_add_global (TYPE_NAME (val), crc);
19453 }
19454
19455 return v;
19456 }
19457
19458 /* Initialize module state. Create the hash table, determine the
19459 global trees. Create the module for current TU. */
19460
19461 void
19462 init_modules (cpp_reader *reader)
19463 {
19464 /* PCH should not be reachable because of lang-specs, but the
19465 user could have overriden that. */
19466 if (pch_file)
19467 fatal_error (input_location,
19468 "C++ modules are incompatible with precompiled headers");
19469
19470 if (cpp_get_options (reader)->traditional)
19471 fatal_error (input_location,
19472 "C++ modules are incompatible with traditional preprocessing");
19473
19474 if (flag_preprocess_only)
19475 {
19476 cpp_options *cpp_opts = cpp_get_options (reader);
19477 if (flag_no_output
19478 || (cpp_opts->deps.style != DEPS_NONE
19479 && !cpp_opts->deps.need_preprocessor_output))
19480 {
19481 warning (0, flag_dump_macros == 'M'
19482 ? G_("macro debug output may be incomplete with modules")
19483 : G_("module dependencies require preprocessing"));
19484 if (cpp_opts->deps.style != DEPS_NONE)
19485 inform (input_location, "you should use the %<-%s%> option",
19486 cpp_opts->deps.style == DEPS_SYSTEM ? "MD" : "MMD");
19487 }
19488 }
19489
19490 /* :: is always exported. */
19491 DECL_MODULE_EXPORT_P (global_namespace) = true;
19492
19493 modules_hash = hash_table<module_state_hash>::create_ggc (31);
19494 vec_safe_reserve (modules, 20);
19495
19496 /* Create module for current TU. */
19497 module_state *current
19498 = new (ggc_alloc<module_state> ()) module_state (NULL_TREE, NULL, false);
19499 current->mod = 0;
19500 bitmap_set_bit (current->imports, 0);
19501 modules->quick_push (current);
19502
19503 gcc_checking_assert (!fixed_trees);
19504
19505 headers = BITMAP_GGC_ALLOC ();
19506
19507 if (note_includes)
19508 for (unsigned ix = 0; ix != note_includes->length (); ix++)
19509 {
19510 const char *hdr = (*note_includes)[ix];
19511 size_t len = strlen (hdr);
19512
19513 bool system = hdr[0] == '<';
19514 bool user = hdr[0] == '"';
19515 bool delimed = system || user;
19516
19517 if (len <= (delimed ? 2 : 0)
19518 || (delimed && hdr[len-1] != (system ? '>' : '"')))
19519 error ("invalid header name %qs", hdr);
19520
19521 hdr = canonicalize_header_name (delimed ? reader : NULL,
19522 0, !delimed, hdr, len);
19523 char *path = XNEWVEC (char, len + 1);
19524 memcpy (path, hdr, len);
19525 path[len+1] = 0;
19526
19527 (*note_includes)[ix] = path;
19528 }
19529
19530 dump.push (NULL);
19531
19532 /* Determine lazy handle bound. */
19533 {
19534 unsigned limit = 1000;
19535 #if HAVE_GETRLIMIT
19536 struct rlimit rlimit;
19537 if (!getrlimit (RLIMIT_NOFILE, &rlimit))
19538 {
19539 lazy_hard_limit = (rlimit.rlim_max < 1000000
19540 ? unsigned (rlimit.rlim_max) : 1000000);
19541 lazy_hard_limit = (lazy_hard_limit > LAZY_HEADROOM
19542 ? lazy_hard_limit - LAZY_HEADROOM : 0);
19543 if (rlimit.rlim_cur < limit)
19544 limit = unsigned (rlimit.rlim_cur);
19545 }
19546 #endif
19547 limit = limit > LAZY_HEADROOM ? limit - LAZY_HEADROOM : 1;
19548
19549 if (unsigned parm = param_lazy_modules)
19550 {
19551 if (parm <= limit || !lazy_hard_limit || !try_increase_lazy (parm))
19552 lazy_limit = parm;
19553 }
19554 else
19555 lazy_limit = limit;
19556 }
19557
19558 if (dump ())
19559 {
19560 verstr_t ver;
19561 version2string (MODULE_VERSION, ver);
19562 dump ("Source: %s", main_input_filename);
19563 dump ("Compiler: %s", version_string);
19564 dump ("Modules: %s", ver);
19565 dump ("Checking: %s",
19566 #if CHECKING_P
19567 "checking"
19568 #elif ENABLE_ASSERT_CHECKING
19569 "asserting"
19570 #else
19571 "release"
19572 #endif
19573 );
19574 dump ("Compiled by: "
19575 #ifdef __GNUC__
19576 "GCC %d.%d, %s", __GNUC__, __GNUC_MINOR__,
19577 #ifdef __OPTIMIZE__
19578 "optimizing"
19579 #else
19580 "not optimizing"
19581 #endif
19582 #else
19583 "not GCC"
19584 #endif
19585 );
19586 dump ("Reading: %s", MAPPED_READING ? "mmap" : "fileio");
19587 dump ("Writing: %s", MAPPED_WRITING ? "mmap" : "fileio");
19588 dump ("Lazy limit: %u", lazy_limit);
19589 dump ("Lazy hard limit: %u", lazy_hard_limit);
19590 dump ("");
19591 }
19592
19593 /* Construct the global tree array. This is an array of unique
19594 global trees (& types). Do this now, rather than lazily, as
19595 some global trees are lazily created and we don't want that to
19596 mess with our syndrome of fixed trees. */
19597 unsigned crc = 0;
19598 vec_alloc (fixed_trees, 200);
19599
19600 dump () && dump ("+Creating globals");
19601 /* Insert the TRANSLATION_UNIT_DECL. */
19602 TREE_VISITED (DECL_CONTEXT (global_namespace)) = true;
19603 fixed_trees->quick_push (DECL_CONTEXT (global_namespace));
19604 for (unsigned jx = 0; global_tree_arys[jx].first; jx++)
19605 {
19606 const tree *ptr = global_tree_arys[jx].first;
19607 unsigned limit = global_tree_arys[jx].second;
19608
19609 for (unsigned ix = 0; ix != limit; ix++, ptr++)
19610 {
19611 !(ix & 31) && dump ("") && dump ("+\t%u:%u:", jx, ix);
19612 unsigned v = maybe_add_global (*ptr, crc);
19613 dump () && dump ("+%u", v);
19614 }
19615 }
19616 global_crc = crc32_unsigned (crc, fixed_trees->length ());
19617 dump ("") && dump ("Created %u unique globals, crc=%x",
19618 fixed_trees->length (), global_crc);
19619 for (unsigned ix = fixed_trees->length (); ix--;)
19620 TREE_VISITED ((*fixed_trees)[ix]) = false;
19621
19622 dump.pop (0);
19623
19624 if (!flag_module_lazy)
19625 /* Get the mapper now, if we're not being lazy. */
19626 get_mapper (cpp_main_loc (reader));
19627
19628 if (!flag_preprocess_only)
19629 {
19630 pending_table = new pendset::hash (EXPERIMENT (1, 400));
19631
19632 entity_map = new entity_map_t (EXPERIMENT (1, 400));
19633 vec_safe_reserve (entity_ary, EXPERIMENT (1, 400));
19634 }
19635
19636 #if CHECKING_P
19637 note_defs = note_defs_table_t::create_ggc (1000);
19638 #endif
19639
19640 if (flag_header_unit && cpp_get_options (reader)->preprocessed)
19641 begin_header_unit (reader);
19642
19643 /* Collect here to make sure things are tagged correctly (when
19644 aggressively GC'd). */
19645 ggc_collect ();
19646 }
19647
19648 /* If NODE is a deferred macro, load it. */
19649
19650 static int
19651 load_macros (cpp_reader *reader, cpp_hashnode *node, void *)
19652 {
19653 location_t main_loc
19654 = MAP_START_LOCATION (LINEMAPS_ORDINARY_MAP_AT (line_table, 0));
19655
19656 if (cpp_user_macro_p (node)
19657 && !node->value.macro)
19658 {
19659 cpp_macro *macro = cpp_get_deferred_macro (reader, node, main_loc);
19660 dump () && dump ("Loaded macro #%s %I",
19661 macro ? "define" : "undef", identifier (node));
19662 }
19663
19664 return 1;
19665 }
19666
19667 /* At the end of tokenizing, we no longer need the macro tables of
19668 imports. But the user might have requested some checking. */
19669
19670 void
19671 maybe_check_all_macros (cpp_reader *reader)
19672 {
19673 if (!warn_imported_macros)
19674 return;
19675
19676 /* Force loading of any remaining deferred macros. This will
19677 produce diagnostics if they are ill-formed. */
19678 unsigned n = dump.push (NULL);
19679 cpp_forall_identifiers (reader, load_macros, NULL);
19680 dump.pop (n);
19681 }
19682
19683 /* Write the CMI, if we're a module interface. */
19684
19685 void
19686 finish_module_processing (cpp_reader *reader)
19687 {
19688 if (header_module_p ())
19689 module_kind &= ~MK_EXPORTING;
19690
19691 if (!modules || !(*modules)[0]->name)
19692 {
19693 if (flag_module_only)
19694 warning (0, "%<-fmodule-only%> used for non-interface");
19695 }
19696 else if (!flag_syntax_only)
19697 {
19698 int fd = -1;
19699 int e = ENOENT;
19700
19701 timevar_start (TV_MODULE_EXPORT);
19702
19703 /* Force a valid but empty line map at the end. This simplifies
19704 the line table preparation and writing logic. */
19705 linemap_add (line_table, LC_ENTER, false, "", 0);
19706
19707 /* We write to a tmpname, and then atomically rename. */
19708 const char *path = NULL;
19709 char *tmp_name = NULL;
19710 module_state *state = (*modules)[0];
19711
19712 unsigned n = dump.push (state);
19713 state->announce ("creating");
19714 if (state->filename)
19715 {
19716 size_t len = 0;
19717 path = maybe_add_cmi_prefix (state->filename, &len);
19718 tmp_name = XNEWVEC (char, len + 3);
19719 memcpy (tmp_name, path, len);
19720 strcpy (&tmp_name[len], "~");
19721
19722 if (!errorcount)
19723 for (unsigned again = 2; ; again--)
19724 {
19725 fd = open (tmp_name,
19726 O_RDWR | O_CREAT | O_TRUNC | O_CLOEXEC | O_BINARY,
19727 S_IRUSR|S_IWUSR|S_IRGRP|S_IWGRP|S_IROTH|S_IWOTH);
19728 e = errno;
19729 if (fd >= 0 || !again || e != ENOENT)
19730 break;
19731 create_dirs (tmp_name);
19732 }
19733 dump () && dump ("CMI is %s", path);
19734 }
19735
19736 if (errorcount)
19737 warning_at (state->loc, 0, "not writing module %qs due to errors",
19738 state->get_flatname ());
19739 else
19740 {
19741 elf_out to (fd, e);
19742 if (to.begin ())
19743 {
19744 auto loc = input_location;
19745 /* So crashes finger point the module decl. */
19746 input_location = state->loc;
19747 state->write (&to, reader);
19748 input_location = loc;
19749 }
19750 if (to.end ())
19751 {
19752 /* Some OS's do not replace NEWNAME if it already
19753 exists. This'll have a race condition in erroneous
19754 concurrent builds. */
19755 unlink (path);
19756 if (rename (tmp_name, path))
19757 {
19758 dump () && dump ("Rename ('%s','%s') errno=%u", errno);
19759 to.set_error (errno);
19760 }
19761 }
19762
19763 if (to.get_error ())
19764 {
19765 error_at (state->loc, "failed to write compiled module: %s",
19766 to.get_error (state->filename));
19767 state->note_cmi_name ();
19768 }
19769 }
19770
19771 if (!errorcount)
19772 {
19773 auto *mapper = get_mapper (cpp_main_loc (reader));
19774
19775 mapper->ModuleCompiled (state->get_flatname ());
19776 }
19777 else if (path)
19778 {
19779 /* We failed, attempt to erase all evidence we even tried. */
19780 unlink (tmp_name);
19781 unlink (path);
19782 XDELETEVEC (tmp_name);
19783 }
19784
19785 dump.pop (n);
19786 timevar_stop (TV_MODULE_EXPORT);
19787
19788 ggc_collect ();
19789 }
19790
19791 if (modules)
19792 {
19793 unsigned n = dump.push (NULL);
19794 dump () && dump ("Imported %u modules", modules->length () - 1);
19795 dump () && dump ("Containing %u clusters", available_clusters);
19796 dump () && dump ("Loaded %u clusters (%u%%)", loaded_clusters,
19797 (loaded_clusters * 100 + available_clusters / 2) /
19798 (available_clusters + !available_clusters));
19799 dump.pop (n);
19800 }
19801
19802 if (modules && !header_module_p ())
19803 {
19804 /* Determine call_init_p. We need the same bitmap allocation
19805 scheme as for the imports member. */
19806 function_depth++; /* Disable GC. */
19807 bitmap indirect_imports (BITMAP_GGC_ALLOC ());
19808
19809 /* Because indirect imports are before their direct import, and
19810 we're scanning the array backwards, we only need one pass! */
19811 for (unsigned ix = modules->length (); --ix;)
19812 {
19813 module_state *import = (*modules)[ix];
19814
19815 if (!import->is_header ()
19816 && !bitmap_bit_p (indirect_imports, ix))
19817 {
19818 /* Everything this imports is therefore indirectly
19819 imported. */
19820 bitmap_ior_into (indirect_imports, import->imports);
19821 /* We don't have to worry about the self-import bit,
19822 because of the single pass. */
19823
19824 import->call_init_p = true;
19825 num_init_calls_needed++;
19826 }
19827 }
19828 function_depth--;
19829 }
19830 }
19831
19832 void
19833 fini_modules ()
19834 {
19835 /* We're done with the macro tables now. */
19836 vec_free (macro_exports);
19837 vec_free (macro_imports);
19838 headers = NULL;
19839
19840 /* We're now done with everything but the module names. */
19841 set_cmi_repo (NULL);
19842 if (mapper)
19843 {
19844 timevar_start (TV_MODULE_MAPPER);
19845 module_client::close_module_client (0, mapper);
19846 mapper = nullptr;
19847 timevar_stop (TV_MODULE_MAPPER);
19848 }
19849 module_state_config::release ();
19850
19851 #if CHECKING_P
19852 note_defs = NULL;
19853 #endif
19854
19855 if (modules)
19856 for (unsigned ix = modules->length (); --ix;)
19857 if (module_state *state = (*modules)[ix])
19858 state->release ();
19859
19860 /* No need to lookup modules anymore. */
19861 modules_hash = NULL;
19862
19863 /* Or entity array. We still need the entity map to find import numbers. */
19864 delete entity_ary;
19865 entity_ary = NULL;
19866
19867 /* Or remember any pending entities. */
19868 delete pending_table;
19869 pending_table = NULL;
19870
19871 /* Or any attachments -- Let it go! */
19872 delete attached_table;
19873 attached_table = NULL;
19874
19875 /* Allow a GC, we've possibly made much data unreachable. */
19876 ggc_collect ();
19877 }
19878
19879 /* If CODE is a module option, handle it & return true. Otherwise
19880 return false. For unknown reasons I cannot get the option
19881 generation machinery to set fmodule-mapper or -fmodule-header to
19882 make a string type option variable. */
19883
19884 bool
19885 handle_module_option (unsigned code, const char *str, int)
19886 {
19887 auto hdr = CMS_header;
19888
19889 switch (opt_code (code))
19890 {
19891 case OPT_fmodule_mapper_:
19892 module_mapper_name = str;
19893 return true;
19894
19895 case OPT_fmodule_header_:
19896 {
19897 if (!strcmp (str, "user"))
19898 hdr = CMS_user;
19899 else if (!strcmp (str, "system"))
19900 hdr = CMS_system;
19901 else
19902 error ("unknown header kind %qs", str);
19903 }
19904 /* Fallthrough. */
19905
19906 case OPT_fmodule_header:
19907 flag_header_unit = hdr;
19908 flag_modules = 1;
19909 return true;
19910
19911 case OPT_flang_info_include_translate_:
19912 vec_safe_push (note_includes, str);
19913 return true;
19914
19915 default:
19916 return false;
19917 }
19918 }
19919
19920 /* Set preprocessor callbacks and options for modules. */
19921
19922 void
19923 module_preprocess_options (cpp_reader *reader)
19924 {
19925 gcc_checking_assert (!lang_hooks.preprocess_undef);
19926 if (modules_p ())
19927 {
19928 auto *cb = cpp_get_callbacks (reader);
19929
19930 cb->translate_include = maybe_translate_include;
19931 cb->user_deferred_macro = module_state::deferred_macro;
19932 if (flag_header_unit)
19933 {
19934 /* If the preprocessor hook is already in use, that
19935 implementation will call the undef langhook. */
19936 if (cb->undef)
19937 lang_hooks.preprocess_undef = module_state::undef_macro;
19938 else
19939 cb->undef = module_state::undef_macro;
19940 }
19941 auto *opt = cpp_get_options (reader);
19942 opt->module_directives = true;
19943 opt->main_search = cpp_main_search (flag_header_unit);
19944 }
19945 }
19946
19947 #include "gt-cp-module.h"