c++: Fix null this pointer [PR 98624]
[gcc.git] / gcc / cp / module.cc
1 /* C++ modules. Experimental!
2 Copyright (C) 2017-2021 Free Software Foundation, Inc.
3 Written by Nathan Sidwell <nathan@acm.org> while at FaceBook
4
5 This file is part of GCC.
6
7 GCC is free software; you can redistribute it and/or modify it
8 under the terms of the GNU General Public License as published by
9 the Free Software Foundation; either version 3, or (at your option)
10 any later version.
11
12 GCC is distributed in the hope that it will be useful, but
13 WITHOUT ANY WARRANTY; without even the implied warranty of
14 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
15 General Public License for more details.
16
17 You should have received a copy of the GNU General Public License
18 along with GCC; see the file COPYING3. If not see
19 <http://www.gnu.org/licenses/>. */
20
21 /* Comments in this file have a non-negligible chance of being wrong
22 or at least inaccurate. Due to (a) my misunderstanding, (b)
23 ambiguities that I have interpretted differently to original intent
24 (c) changes in the specification, (d) my poor wording, (e) source
25 changes. */
26
27 /* (Incomplete) Design Notes
28
29 A hash table contains all module names. Imported modules are
30 present in a modules array, which by construction places an
31 import's dependencies before the import itself. The single
32 exception is the current TU, which always occupies slot zero (even
33 when it is not a module).
34
35 Imported decls occupy an entity_ary, an array of binding_slots, indexed
36 by importing module and index within that module. A flat index is
37 used, as each module reserves a contiguous range of indices.
38 Initially each slot indicates the CMI section containing the
39 streamed decl. When the decl is imported it will point to the decl
40 itself.
41
42 Additionally each imported decl is mapped in the entity_map via its
43 DECL_UID to the flat index in the entity_ary. Thus we can locate
44 the index for any imported decl by using this map and then
45 de-flattening the index via a binary seach of the module vector.
46 Cross-module references are by (remapped) module number and
47 module-local index.
48
49 Each importable DECL contains several flags. The simple set are
50 DECL_EXPORT_P, DECL_MODULE_PURVIEW_P and DECL_MODULE_IMPORT_P. The
51 first indicates whether it is exported, the second whether it is in
52 the module purview (as opposed to the global module fragment), and
53 the third indicates whether it was an import into this TU or not.
54
55 The more detailed flags are DECL_MODULE_PARTITION_P,
56 DECL_MODULE_ENTITY_P & DECL_MODULE_PENDING_SPECIALIZATIONS_P. The
57 first is set in a primary interface unit on decls that were read
58 from module partitions (these will have DECL_MODULE_IMPORT_P set
59 too). Such decls will be streamed out to the primary's CMI.
60 DECL_MODULE_ENTITY_P is set when an entity is imported, even if it
61 matched a non-imported entity. Such a decl will not have
62 DECL_MODULE_IMPORT_P set, even though it has an entry in the entity
63 map and array. DECL_MODULE_PENDING_SPECIALIZATIONS_P is set on a
64 primary template, and indicates there are specializations that
65 should be streamed in before trying to specialize this template.
66
67 Header units are module-like.
68
69 For namespace-scope lookup, the decls for a particular module are
70 held located in a sparse array hanging off the binding of the name.
71 This is partitioned into two: a few fixed slots at the start
72 followed by the sparse slots afterwards. By construction we only
73 need to append new slots to the end -- there is never a need to
74 insert in the middle. The fixed slots are MODULE_SLOT_CURRENT for
75 the current TU (regardless of whether it is a module or not),
76 MODULE_SLOT_GLOBAL and MODULE_SLOT_PARTITION. These latter two
77 slots are used for merging entities across the global module and
78 module partitions respectively. MODULE_SLOT_PARTITION is only
79 present in a module. Neither of those two slots is searched during
80 name lookup -- they are internal use only. This vector is created
81 lazily once we require it, if there is only a declaration from the
82 current TU, a regular binding is present. It is converted on
83 demand.
84
85 OPTIMIZATION: Outside of the current TU, we only need ADL to work.
86 We could optimize regular lookup for the current TU by glomming all
87 the visible decls on its slot. Perhaps wait until design is a
88 little more settled though.
89
90 There is only one instance of each extern-linkage namespace. It
91 appears in every module slot that makes it visible. It also
92 appears in MODULE_SLOT_GLOBAL. (It is an ODR violation if they
93 collide with some other global module entity.) We also have an
94 optimization that shares the slot for adjacent modules that declare
95 the same such namespace.
96
97 A module interface compilation produces a Compiled Module Interface
98 (CMI). The format used is Encapsulated Lazy Records Of Numbered
99 Declarations, which is essentially ELF's section encapsulation. (As
100 all good nerds are aware, Elrond is half Elf.) Some sections are
101 named, and contain information about the module as a whole (indices
102 etc), and other sections are referenced by number. Although I
103 don't defend against actively hostile CMIs, there is some
104 checksumming involved to verify data integrity. When dumping out
105 an interface, we generate a graph of all the
106 independently-redeclarable DECLS that are needed, and the decls
107 they reference. From that we determine the strongly connected
108 components (SCC) within this TU. Each SCC is dumped to a separate
109 numbered section of the CMI. We generate a binding table section,
110 mapping each namespace&name to a defining section. This allows
111 lazy loading.
112
113 Lazy loading employs mmap to map a read-only image of the CMI.
114 It thus only occupies address space and is paged in on demand,
115 backed by the CMI file itself. If mmap is unavailable, regular
116 FILEIO is used. Also, there's a bespoke ELF reader/writer here,
117 which implements just the section table and sections (including
118 string sections) of a 32-bit ELF in host byte-order. You can of
119 course inspect it with readelf. I figured 32-bit is sufficient,
120 for a single module. I detect running out of section numbers, but
121 do not implement the ELF overflow mechanism. At least you'll get
122 an error if that happens.
123
124 We do not separate declarations and definitions. My guess is that
125 if you refer to the declaration, you'll also need the definition
126 (template body, inline function, class definition etc). But this
127 does mean we can get larger SCCs than if we separated them. It is
128 unclear whether this is a win or not.
129
130 Notice that we embed section indices into the contents of other
131 sections. Thus random manipulation of the CMI file by ELF tools
132 may well break it. The kosher way would probably be to introduce
133 indirection via section symbols, but that would require defining a
134 relocation type.
135
136 Notice that lazy loading of one module's decls can cause lazy
137 loading of other decls in the same or another module. Clearly we
138 want to avoid loops. In a correct program there can be no loops in
139 the module dependency graph, and the above-mentioned SCC algorithm
140 places all intra-module circular dependencies in the same SCC. It
141 also orders the SCCs wrt each other, so dependent SCCs come first.
142 As we load dependent modules first, we know there can be no
143 reference to a higher-numbered module, and because we write out
144 dependent SCCs first, likewise for SCCs within the module. This
145 allows us to immediately detect broken references. When loading,
146 we must ensure the rest of the compiler doesn't cause some
147 unconnected load to occur (for instance, instantiate a template).
148
149 Classes used:
150
151 dumper - logger
152
153 data - buffer
154
155 bytes - data streamer
156 bytes_in : bytes - scalar reader
157 bytes_out : bytes - scalar writer
158
159 elf - ELROND format
160 elf_in : elf - ELROND reader
161 elf_out : elf - ELROND writer
162
163 trees_in : bytes_in - tree reader
164 trees_out : bytes_out - tree writer
165
166 depset - dependency set
167 depset::hash - hash table of depsets
168 depset::tarjan - SCC determinator
169
170 uidset<T> - set T's related to a UID
171 uidset<T>::hash hash table of uidset<T>
172
173 loc_spans - location map data
174
175 module_state - module object
176
177 slurping - data needed during loading
178
179 macro_import - imported macro data
180 macro_export - exported macro data
181
182 The ELROND objects use mmap, for both reading and writing. If mmap
183 is unavailable, fileno IO is used to read and write blocks of data.
184
185 The mapper object uses fileno IO to communicate with the server or
186 program. */
187
188 /* In expermental (trunk) sources, MODULE_VERSION is a #define passed
189 in from the Makefile. It records the modification date of the
190 source directory -- that's the only way to stay sane. In release
191 sources, we (plan to) use the compiler's major.minor versioning.
192 While the format might not change between at minor versions, it
193 seems simplest to tie the two together. There's no concept of
194 inter-version compatibility. */
195 #define IS_EXPERIMENTAL(V) ((V) >= (1U << 20))
196 #define MODULE_MAJOR(V) ((V) / 10000)
197 #define MODULE_MINOR(V) ((V) % 10000)
198 #define EXPERIMENT(A,B) (IS_EXPERIMENTAL (MODULE_VERSION) ? (A) : (B))
199 #ifndef MODULE_VERSION
200 // Be sure you're ready! Remove #error this before release!
201 #error "Shtopp! What are you doing? This is not ready yet."
202 #include "bversion.h"
203 #define MODULE_VERSION (BUILDING_GCC_MAJOR * 10000U + BUILDING_GCC_MINOR)
204 #elif !IS_EXPERIMENTAL (MODULE_VERSION)
205 #error "This is not the version I was looking for."
206 #endif
207
208 #define _DEFAULT_SOURCE 1 /* To get TZ field of struct tm, if available. */
209 #include "config.h"
210 #define INCLUDE_STRING
211 #define INCLUDE_VECTOR
212 #include "system.h"
213 #include "coretypes.h"
214 #include "cp-tree.h"
215 #include "timevar.h"
216 #include "stringpool.h"
217 #include "dumpfile.h"
218 #include "bitmap.h"
219 #include "cgraph.h"
220 #include "tree-iterator.h"
221 #include "cpplib.h"
222 #include "mkdeps.h"
223 #include "incpath.h"
224 #include "libiberty.h"
225 #include "stor-layout.h"
226 #include "version.h"
227 #include "tree-diagnostic.h"
228 #include "toplev.h"
229 #include "opts.h"
230 #include "attribs.h"
231 #include "intl.h"
232 #include "langhooks.h"
233 /* This TU doesn't need or want to see the networking. */
234 #define CODY_NETWORKING 0
235 #include "mapper-client.h"
236
237 #if 0 // 1 for testing no mmap
238 #define MAPPED_READING 0
239 #define MAPPED_WRITING 0
240 #else
241 #if HAVE_MMAP_FILE && _POSIX_MAPPED_FILES > 0
242 /* mmap, munmap. */
243 #define MAPPED_READING 1
244 #if HAVE_SYSCONF && defined (_SC_PAGE_SIZE)
245 /* msync, sysconf (_SC_PAGE_SIZE), ftruncate */
246 /* posix_fallocate used if available. */
247 #define MAPPED_WRITING 1
248 #else
249 #define MAPPED_WRITING 0
250 #endif
251 #else
252 #define MAPPED_READING 0
253 #define MAPPED_WRITING 0
254 #endif
255 #endif
256
257 /* Some open(2) flag differences, what a colourful world it is! */
258 #if defined (O_CLOEXEC)
259 // OK
260 #elif defined (_O_NOINHERIT)
261 /* Windows' _O_NOINHERIT matches O_CLOEXEC flag */
262 #define O_CLOEXEC _O_NOINHERIT
263 #else
264 #define O_CLOEXEC 0
265 #endif
266 #if defined (O_BINARY)
267 // Ok?
268 #elif defined (_O_BINARY)
269 /* Windows' open(2) call defaults to text! */
270 #define O_BINARY _O_BINARY
271 #else
272 #define O_BINARY 0
273 #endif
274
275 static inline cpp_hashnode *cpp_node (tree id)
276 {
277 return CPP_HASHNODE (GCC_IDENT_TO_HT_IDENT (id));
278 }
279
280 static inline tree identifier (const cpp_hashnode *node)
281 {
282 return HT_IDENT_TO_GCC_IDENT (HT_NODE (const_cast<cpp_hashnode *> (node)));
283 }
284
285 /* During duplicate detection we need to tell some comparators that
286 these are equivalent. */
287 tree map_context_from;
288 tree map_context_to;
289
290 /* Id for dumping module information. */
291 int module_dump_id;
292
293 /* We have a special module owner. */
294 #define MODULE_UNKNOWN (~0U) /* Not yet known. */
295
296 /* Prefix for section names. */
297 #define MOD_SNAME_PFX ".gnu.c++"
298
299 /* Format a version for user consumption. */
300
301 typedef char verstr_t[32];
302 static void
303 version2string (unsigned version, verstr_t &out)
304 {
305 unsigned major = MODULE_MAJOR (version);
306 unsigned minor = MODULE_MINOR (version);
307
308 if (IS_EXPERIMENTAL (version))
309 sprintf (out, "%04u/%02u/%02u-%02u:%02u%s",
310 2000 + major / 10000, (major / 100) % 100, (major % 100),
311 minor / 100, minor % 100,
312 EXPERIMENT ("", " (experimental)"));
313 else
314 sprintf (out, "%u.%u", major, minor);
315 }
316
317 /* Include files to note translation for. */
318 static vec<const char *, va_heap, vl_embed> *note_includes;
319
320 /* Traits to hash an arbitrary pointer. Entries are not deletable,
321 and removal is a noop (removal needed upon destruction). */
322 template <typename T>
323 struct nodel_ptr_hash : pointer_hash<T>, typed_noop_remove <T *> {
324 /* Nothing is deletable. Everything is insertable. */
325 static bool is_deleted (T *) { return false; }
326 static void mark_deleted (T *) { gcc_unreachable (); }
327 };
328
329 /* Map from pointer to signed integer. */
330 typedef simple_hashmap_traits<nodel_ptr_hash<void>, int> ptr_int_traits;
331 typedef hash_map<void *,signed,ptr_int_traits> ptr_int_hash_map;
332
333 /********************************************************************/
334 /* Basic streaming & ELF. Serialization is usually via mmap. For
335 writing we slide a buffer over the output file, syncing it
336 approproiately. For reading we simply map the whole file (as a
337 file-backed read-only map -- it's just address space, leaving the
338 OS pager to deal with getting the data to us). Some buffers need
339 to be more conventional malloc'd contents. */
340
341 /* Variable length buffer. */
342
343 class data {
344 public:
345 class allocator {
346 public:
347 /* Tools tend to moan if the dtor's not virtual. */
348 virtual ~allocator () {}
349
350 public:
351 void grow (data &obj, unsigned needed, bool exact);
352 void shrink (data &obj);
353
354 public:
355 virtual char *grow (char *ptr, unsigned needed);
356 virtual void shrink (char *ptr);
357 };
358
359 public:
360 char *buffer; /* Buffer being transferred. */
361 /* Although size_t would be the usual size, we know we never get
362 more than 4GB of buffer -- because that's the limit of the
363 encapsulation format. And if you need bigger imports, you're
364 doing it wrong. */
365 unsigned size; /* Allocated size of buffer. */
366 unsigned pos; /* Position in buffer. */
367
368 public:
369 data ()
370 :buffer (NULL), size (0), pos (0)
371 {
372 }
373 ~data ()
374 {
375 /* Make sure the derived and/or using class know what they're
376 doing. */
377 gcc_checking_assert (!buffer);
378 }
379
380 protected:
381 char *use (unsigned count)
382 {
383 if (size < pos + count)
384 return NULL;
385 char *res = &buffer[pos];
386 pos += count;
387 return res;
388 }
389
390 public:
391 void unuse (unsigned count)
392 {
393 pos -= count;
394 }
395
396 public:
397 static allocator simple_memory;
398 };
399
400 /* The simple data allocator. */
401 data::allocator data::simple_memory;
402
403 /* Grow buffer to at least size NEEDED. */
404
405 void
406 data::allocator::grow (data &obj, unsigned needed, bool exact)
407 {
408 gcc_checking_assert (needed ? needed > obj.size : !obj.size);
409 if (!needed)
410 /* Pick a default size. */
411 needed = EXPERIMENT (100, 1000);
412
413 if (!exact)
414 needed *= 2;
415 obj.buffer = grow (obj.buffer, needed);
416 if (obj.buffer)
417 obj.size = needed;
418 else
419 obj.pos = obj.size = 0;
420 }
421
422 /* Free a buffer. */
423
424 void
425 data::allocator::shrink (data &obj)
426 {
427 shrink (obj.buffer);
428 obj.buffer = NULL;
429 obj.size = 0;
430 }
431
432 char *
433 data::allocator::grow (char *ptr, unsigned needed)
434 {
435 return XRESIZEVAR (char, ptr, needed);
436 }
437
438 void
439 data::allocator::shrink (char *ptr)
440 {
441 XDELETEVEC (ptr);
442 }
443
444 /* Byte streamer base. Buffer with read/write position and smarts
445 for single bits. */
446
447 class bytes : public data {
448 public:
449 typedef data parent;
450
451 protected:
452 uint32_t bit_val; /* Bit buffer. */
453 unsigned bit_pos; /* Next bit in bit buffer. */
454
455 public:
456 bytes ()
457 :parent (), bit_val (0), bit_pos (0)
458 {}
459 ~bytes ()
460 {
461 }
462
463 protected:
464 unsigned calc_crc (unsigned) const;
465
466 protected:
467 /* Finish bit packet. Rewind the bytes not used. */
468 unsigned bit_flush ()
469 {
470 gcc_assert (bit_pos);
471 unsigned bytes = (bit_pos + 7) / 8;
472 unuse (4 - bytes);
473 bit_pos = 0;
474 bit_val = 0;
475 return bytes;
476 }
477 };
478
479 /* Calculate the crc32 of the buffer. Note the CRC is stored in the
480 first 4 bytes, so don't include them. */
481
482 unsigned
483 bytes::calc_crc (unsigned l) const
484 {
485 unsigned crc = 0;
486 for (size_t ix = 4; ix < l; ix++)
487 crc = crc32_byte (crc, buffer[ix]);
488 return crc;
489 }
490
491 class elf_in;
492
493 /* Byte stream reader. */
494
495 class bytes_in : public bytes {
496 typedef bytes parent;
497
498 protected:
499 bool overrun; /* Sticky read-too-much flag. */
500
501 public:
502 bytes_in ()
503 : parent (), overrun (false)
504 {
505 }
506 ~bytes_in ()
507 {
508 }
509
510 public:
511 /* Begin reading a named section. */
512 bool begin (location_t loc, elf_in *src, const char *name);
513 /* Begin reading a numbered section with optional name. */
514 bool begin (location_t loc, elf_in *src, unsigned, const char * = NULL);
515 /* Complete reading a buffer. Propagate errors and return true on
516 success. */
517 bool end (elf_in *src);
518 /* Return true if there is unread data. */
519 bool more_p () const
520 {
521 return pos != size;
522 }
523
524 public:
525 /* Start reading at OFFSET. */
526 void random_access (unsigned offset)
527 {
528 if (offset > size)
529 set_overrun ();
530 pos = offset;
531 bit_pos = bit_val = 0;
532 }
533
534 public:
535 void align (unsigned boundary)
536 {
537 if (unsigned pad = pos & (boundary - 1))
538 read (boundary - pad);
539 }
540
541 public:
542 const char *read (unsigned count)
543 {
544 char *ptr = use (count);
545 if (!ptr)
546 set_overrun ();
547 return ptr;
548 }
549
550 public:
551 bool check_crc () const;
552 /* We store the CRC in the first 4 bytes, using host endianness. */
553 unsigned get_crc () const
554 {
555 return *(const unsigned *)&buffer[0];
556 }
557
558 public:
559 /* Manipulate the overrun flag. */
560 bool get_overrun () const
561 {
562 return overrun;
563 }
564 void set_overrun ()
565 {
566 overrun = true;
567 }
568
569 public:
570 unsigned u32 (); /* Read uncompressed integer. */
571
572 public:
573 bool b (); /* Read a bool. */
574 void bflush (); /* Completed a block of bools. */
575
576 private:
577 void bfill (); /* Get the next block of bools. */
578
579 public:
580 int c (); /* Read a char. */
581 int i (); /* Read a signed int. */
582 unsigned u (); /* Read an unsigned int. */
583 size_t z (); /* Read a size_t. */
584 HOST_WIDE_INT wi (); /* Read a HOST_WIDE_INT. */
585 unsigned HOST_WIDE_INT wu (); /* Read an unsigned HOST_WIDE_INT. */
586 const char *str (size_t * = NULL); /* Read a string. */
587 const void *buf (size_t); /* Read a fixed-length buffer. */
588 cpp_hashnode *cpp_node (); /* Read a cpp node. */
589 };
590
591 /* Verify the buffer's CRC is correct. */
592
593 bool
594 bytes_in::check_crc () const
595 {
596 if (size < 4)
597 return false;
598
599 unsigned c_crc = calc_crc (size);
600 if (c_crc != get_crc ())
601 return false;
602
603 return true;
604 }
605
606 class elf_out;
607
608 /* Byte stream writer. */
609
610 class bytes_out : public bytes {
611 typedef bytes parent;
612
613 public:
614 allocator *memory; /* Obtainer of memory. */
615
616 public:
617 bytes_out (allocator *memory)
618 : parent (), memory (memory)
619 {
620 }
621 ~bytes_out ()
622 {
623 }
624
625 public:
626 bool streaming_p () const
627 {
628 return memory != NULL;
629 }
630
631 public:
632 void set_crc (unsigned *crc_ptr);
633
634 public:
635 /* Begin writing, maybe reserve space for CRC. */
636 void begin (bool need_crc = true);
637 /* Finish writing. Spill to section by number. */
638 unsigned end (elf_out *, unsigned, unsigned *crc_ptr = NULL);
639
640 public:
641 void align (unsigned boundary)
642 {
643 if (unsigned pad = pos & (boundary - 1))
644 write (boundary - pad);
645 }
646
647 public:
648 char *write (unsigned count, bool exact = false)
649 {
650 if (size < pos + count)
651 memory->grow (*this, pos + count, exact);
652 return use (count);
653 }
654
655 public:
656 void u32 (unsigned); /* Write uncompressed integer. */
657
658 public:
659 void b (bool); /* Write bool. */
660 void bflush (); /* Finish block of bools. */
661
662 public:
663 void c (unsigned char); /* Write unsigned char. */
664 void i (int); /* Write signed int. */
665 void u (unsigned); /* Write unsigned int. */
666 void z (size_t s); /* Write size_t. */
667 void wi (HOST_WIDE_INT); /* Write HOST_WIDE_INT. */
668 void wu (unsigned HOST_WIDE_INT); /* Write unsigned HOST_WIDE_INT. */
669 void str (const char *ptr)
670 {
671 str (ptr, strlen (ptr));
672 }
673 void cpp_node (const cpp_hashnode *node)
674 {
675 str ((const char *)NODE_NAME (node), NODE_LEN (node));
676 }
677 void str (const char *, size_t); /* Write string of known length. */
678 void buf (const void *, size_t); /* Write fixed length buffer. */
679 void *buf (size_t); /* Create a writable buffer */
680
681 public:
682 /* Format a NUL-terminated raw string. */
683 void printf (const char *, ...) ATTRIBUTE_PRINTF_2;
684 void print_time (const char *, const tm *, const char *);
685
686 public:
687 /* Dump instrumentation. */
688 static void instrument ();
689
690 protected:
691 /* Instrumentation. */
692 static unsigned spans[4];
693 static unsigned lengths[4];
694 static int is_set;
695 };
696
697 /* Instrumentation. */
698 unsigned bytes_out::spans[4];
699 unsigned bytes_out::lengths[4];
700 int bytes_out::is_set = -1;
701
702 /* If CRC_PTR non-null, set the CRC of the buffer. Mix the CRC into
703 that pointed to by CRC_PTR. */
704
705 void
706 bytes_out::set_crc (unsigned *crc_ptr)
707 {
708 if (crc_ptr)
709 {
710 gcc_checking_assert (pos >= 4);
711
712 unsigned crc = calc_crc (pos);
713 unsigned accum = *crc_ptr;
714 /* Only mix the existing *CRC_PTR if it is non-zero. */
715 accum = accum ? crc32_unsigned (accum, crc) : crc;
716 *crc_ptr = accum;
717
718 /* Buffer will be sufficiently aligned. */
719 *(unsigned *)buffer = crc;
720 }
721 }
722
723 /* Finish a set of bools. */
724
725 void
726 bytes_out::bflush ()
727 {
728 if (bit_pos)
729 {
730 u32 (bit_val);
731 lengths[2] += bit_flush ();
732 }
733 spans[2]++;
734 is_set = -1;
735 }
736
737 void
738 bytes_in::bflush ()
739 {
740 if (bit_pos)
741 bit_flush ();
742 }
743
744 /* When reading, we don't know how many bools we'll read in. So read
745 4 bytes-worth, and then rewind when flushing if we didn't need them
746 all. You can't have a block of bools closer than 4 bytes to the
747 end of the buffer. */
748
749 void
750 bytes_in::bfill ()
751 {
752 bit_val = u32 ();
753 }
754
755 /* Bools are packed into bytes. You cannot mix bools and non-bools.
756 You must call bflush before emitting another type. So batch your
757 bools.
758
759 It may be worth optimizing for most bools being zero. Some kind of
760 run-length encoding? */
761
762 void
763 bytes_out::b (bool x)
764 {
765 if (is_set != x)
766 {
767 is_set = x;
768 spans[x]++;
769 }
770 lengths[x]++;
771 bit_val |= unsigned (x) << bit_pos++;
772 if (bit_pos == 32)
773 {
774 u32 (bit_val);
775 lengths[2] += bit_flush ();
776 }
777 }
778
779 bool
780 bytes_in::b ()
781 {
782 if (!bit_pos)
783 bfill ();
784 bool v = (bit_val >> bit_pos++) & 1;
785 if (bit_pos == 32)
786 bit_flush ();
787 return v;
788 }
789
790 /* Exactly 4 bytes. Used internally for bool packing and a few other
791 places. We can't simply use uint32_t because (a) alignment and
792 (b) we need little-endian for the bool streaming rewinding to make
793 sense. */
794
795 void
796 bytes_out::u32 (unsigned val)
797 {
798 if (char *ptr = write (4))
799 {
800 ptr[0] = val;
801 ptr[1] = val >> 8;
802 ptr[2] = val >> 16;
803 ptr[3] = val >> 24;
804 }
805 }
806
807 unsigned
808 bytes_in::u32 ()
809 {
810 unsigned val = 0;
811 if (const char *ptr = read (4))
812 {
813 val |= (unsigned char)ptr[0];
814 val |= (unsigned char)ptr[1] << 8;
815 val |= (unsigned char)ptr[2] << 16;
816 val |= (unsigned char)ptr[3] << 24;
817 }
818
819 return val;
820 }
821
822 /* Chars are unsigned and written as single bytes. */
823
824 void
825 bytes_out::c (unsigned char v)
826 {
827 if (char *ptr = write (1))
828 *ptr = v;
829 }
830
831 int
832 bytes_in::c ()
833 {
834 int v = 0;
835 if (const char *ptr = read (1))
836 v = (unsigned char)ptr[0];
837 return v;
838 }
839
840 /* Ints 7-bit as a byte. Otherwise a 3bit count of following bytes in
841 big-endian form. 4 bits are in the first byte. */
842
843 void
844 bytes_out::i (int v)
845 {
846 if (char *ptr = write (1))
847 {
848 if (v <= 0x3f && v >= -0x40)
849 *ptr = v & 0x7f;
850 else
851 {
852 unsigned bytes = 0;
853 int probe;
854 if (v >= 0)
855 for (probe = v >> 8; probe > 0x7; probe >>= 8)
856 bytes++;
857 else
858 for (probe = v >> 8; probe < -0x8; probe >>= 8)
859 bytes++;
860 *ptr = 0x80 | bytes << 4 | (probe & 0xf);
861 if ((ptr = write (++bytes)))
862 for (; bytes--; v >>= 8)
863 ptr[bytes] = v & 0xff;
864 }
865 }
866 }
867
868 int
869 bytes_in::i ()
870 {
871 int v = 0;
872 if (const char *ptr = read (1))
873 {
874 v = *ptr & 0xff;
875 if (v & 0x80)
876 {
877 unsigned bytes = (v >> 4) & 0x7;
878 v &= 0xf;
879 if (v & 0x8)
880 v |= -1 ^ 0x7;
881 /* unsigned necessary due to left shifts of -ve values. */
882 unsigned uv = unsigned (v);
883 if ((ptr = read (++bytes)))
884 while (bytes--)
885 uv = (uv << 8) | (*ptr++ & 0xff);
886 v = int (uv);
887 }
888 else if (v & 0x40)
889 v |= -1 ^ 0x3f;
890 }
891
892 return v;
893 }
894
895 void
896 bytes_out::u (unsigned v)
897 {
898 if (char *ptr = write (1))
899 {
900 if (v <= 0x7f)
901 *ptr = v;
902 else
903 {
904 unsigned bytes = 0;
905 unsigned probe;
906 for (probe = v >> 8; probe > 0xf; probe >>= 8)
907 bytes++;
908 *ptr = 0x80 | bytes << 4 | probe;
909 if ((ptr = write (++bytes)))
910 for (; bytes--; v >>= 8)
911 ptr[bytes] = v & 0xff;
912 }
913 }
914 }
915
916 unsigned
917 bytes_in::u ()
918 {
919 unsigned v = 0;
920
921 if (const char *ptr = read (1))
922 {
923 v = *ptr & 0xff;
924 if (v & 0x80)
925 {
926 unsigned bytes = (v >> 4) & 0x7;
927 v &= 0xf;
928 if ((ptr = read (++bytes)))
929 while (bytes--)
930 v = (v << 8) | (*ptr++ & 0xff);
931 }
932 }
933
934 return v;
935 }
936
937 void
938 bytes_out::wi (HOST_WIDE_INT v)
939 {
940 if (char *ptr = write (1))
941 {
942 if (v <= 0x3f && v >= -0x40)
943 *ptr = v & 0x7f;
944 else
945 {
946 unsigned bytes = 0;
947 HOST_WIDE_INT probe;
948 if (v >= 0)
949 for (probe = v >> 8; probe > 0x7; probe >>= 8)
950 bytes++;
951 else
952 for (probe = v >> 8; probe < -0x8; probe >>= 8)
953 bytes++;
954 *ptr = 0x80 | bytes << 4 | (probe & 0xf);
955 if ((ptr = write (++bytes)))
956 for (; bytes--; v >>= 8)
957 ptr[bytes] = v & 0xff;
958 }
959 }
960 }
961
962 HOST_WIDE_INT
963 bytes_in::wi ()
964 {
965 HOST_WIDE_INT v = 0;
966 if (const char *ptr = read (1))
967 {
968 v = *ptr & 0xff;
969 if (v & 0x80)
970 {
971 unsigned bytes = (v >> 4) & 0x7;
972 v &= 0xf;
973 if (v & 0x8)
974 v |= -1 ^ 0x7;
975 /* unsigned necessary due to left shifts of -ve values. */
976 unsigned HOST_WIDE_INT uv = (unsigned HOST_WIDE_INT) v;
977 if ((ptr = read (++bytes)))
978 while (bytes--)
979 uv = (uv << 8) | (*ptr++ & 0xff);
980 v = (HOST_WIDE_INT) uv;
981 }
982 else if (v & 0x40)
983 v |= -1 ^ 0x3f;
984 }
985
986 return v;
987 }
988
989 /* unsigned wide ints are just written as signed wide ints. */
990
991 inline void
992 bytes_out::wu (unsigned HOST_WIDE_INT v)
993 {
994 wi ((HOST_WIDE_INT) v);
995 }
996
997 inline unsigned HOST_WIDE_INT
998 bytes_in::wu ()
999 {
1000 return (unsigned HOST_WIDE_INT) wi ();
1001 }
1002
1003 /* size_t written as unsigned or unsigned wide int. */
1004
1005 inline void
1006 bytes_out::z (size_t s)
1007 {
1008 if (sizeof (s) == sizeof (unsigned))
1009 u (s);
1010 else
1011 wu (s);
1012 }
1013
1014 inline size_t
1015 bytes_in::z ()
1016 {
1017 if (sizeof (size_t) == sizeof (unsigned))
1018 return u ();
1019 else
1020 return wu ();
1021 }
1022
1023 /* Buffer simply memcpied. */
1024 void *
1025 bytes_out::buf (size_t len)
1026 {
1027 align (sizeof (void *) * 2);
1028 return write (len);
1029 }
1030
1031 void
1032 bytes_out::buf (const void *src, size_t len)
1033 {
1034 if (void *ptr = buf (len))
1035 memcpy (ptr, src, len);
1036 }
1037
1038 const void *
1039 bytes_in::buf (size_t len)
1040 {
1041 align (sizeof (void *) * 2);
1042 const char *ptr = read (len);
1043
1044 return ptr;
1045 }
1046
1047 /* strings as an size_t length, followed by the buffer. Make sure
1048 there's a NUL terminator on read. */
1049
1050 void
1051 bytes_out::str (const char *string, size_t len)
1052 {
1053 z (len);
1054 if (len)
1055 {
1056 gcc_checking_assert (!string[len]);
1057 buf (string, len + 1);
1058 }
1059 }
1060
1061 const char *
1062 bytes_in::str (size_t *len_p)
1063 {
1064 size_t len = z ();
1065
1066 /* We're about to trust some user data. */
1067 if (overrun)
1068 len = 0;
1069 if (len_p)
1070 *len_p = len;
1071 const char *str = NULL;
1072 if (len)
1073 {
1074 str = reinterpret_cast<const char *> (buf (len + 1));
1075 if (!str || str[len])
1076 {
1077 set_overrun ();
1078 str = NULL;
1079 }
1080 }
1081 return str ? str : "";
1082 }
1083
1084 cpp_hashnode *
1085 bytes_in::cpp_node ()
1086 {
1087 size_t len;
1088 const char *s = str (&len);
1089 if (!len)
1090 return NULL;
1091 return ::cpp_node (get_identifier_with_length (s, len));
1092 }
1093
1094 /* Format a string directly to the buffer, including a terminating
1095 NUL. Intended for human consumption. */
1096
1097 void
1098 bytes_out::printf (const char *format, ...)
1099 {
1100 va_list args;
1101 /* Exercise buffer expansion. */
1102 size_t len = EXPERIMENT (10, 500);
1103
1104 while (char *ptr = write (len))
1105 {
1106 va_start (args, format);
1107 size_t actual = vsnprintf (ptr, len, format, args) + 1;
1108 va_end (args);
1109 if (actual <= len)
1110 {
1111 unuse (len - actual);
1112 break;
1113 }
1114 unuse (len);
1115 len = actual;
1116 }
1117 }
1118
1119 void
1120 bytes_out::print_time (const char *kind, const tm *time, const char *tz)
1121 {
1122 printf ("%stime: %4u/%02u/%02u %02u:%02u:%02u %s",
1123 kind, time->tm_year + 1900, time->tm_mon + 1, time->tm_mday,
1124 time->tm_hour, time->tm_min, time->tm_sec, tz);
1125 }
1126
1127 /* Encapsulated Lazy Records Of Named Declarations.
1128 Header: Stunningly Elf32_Ehdr-like
1129 Sections: Sectional data
1130 [1-N) : User data sections
1131 N .strtab : strings, stunningly ELF STRTAB-like
1132 Index: Section table, stunningly ELF32_Shdr-like. */
1133
1134 class elf {
1135 protected:
1136 /* Constants used within the format. */
1137 enum private_constants {
1138 /* File kind. */
1139 ET_NONE = 0,
1140 EM_NONE = 0,
1141 OSABI_NONE = 0,
1142
1143 /* File format. */
1144 EV_CURRENT = 1,
1145 CLASS32 = 1,
1146 DATA2LSB = 1,
1147 DATA2MSB = 2,
1148
1149 /* Section numbering. */
1150 SHN_UNDEF = 0,
1151 SHN_LORESERVE = 0xff00,
1152 SHN_XINDEX = 0xffff,
1153
1154 /* Section types. */
1155 SHT_NONE = 0, /* No contents. */
1156 SHT_PROGBITS = 1, /* Random bytes. */
1157 SHT_STRTAB = 3, /* A string table. */
1158
1159 /* Section flags. */
1160 SHF_NONE = 0x00, /* Nothing. */
1161 SHF_STRINGS = 0x20, /* NUL-Terminated strings. */
1162
1163 /* I really hope we do not get CMI files larger than 4GB. */
1164 MY_CLASS = CLASS32,
1165 /* It is host endianness that is relevant. */
1166 MY_ENDIAN = DATA2LSB
1167 #ifdef WORDS_BIGENDIAN
1168 ^ DATA2LSB ^ DATA2MSB
1169 #endif
1170 };
1171
1172 public:
1173 /* Constants visible to users. */
1174 enum public_constants {
1175 /* Special error codes. Breaking layering a bit. */
1176 E_BAD_DATA = -1, /* Random unexpected data errors. */
1177 E_BAD_LAZY = -2, /* Badly ordered laziness. */
1178 E_BAD_IMPORT = -3 /* A nested import failed. */
1179 };
1180
1181 protected:
1182 /* File identification. On-disk representation. */
1183 struct ident {
1184 uint8_t magic[4]; /* 0x7f, 'E', 'L', 'F' */
1185 uint8_t klass; /* 4:CLASS32 */
1186 uint8_t data; /* 5:DATA2[LM]SB */
1187 uint8_t version; /* 6:EV_CURRENT */
1188 uint8_t osabi; /* 7:OSABI_NONE */
1189 uint8_t abiver; /* 8: 0 */
1190 uint8_t pad[7]; /* 9-15 */
1191 };
1192 /* File header. On-disk representation. */
1193 struct header {
1194 struct ident ident;
1195 uint16_t type; /* ET_NONE */
1196 uint16_t machine; /* EM_NONE */
1197 uint32_t version; /* EV_CURRENT */
1198 uint32_t entry; /* 0 */
1199 uint32_t phoff; /* 0 */
1200 uint32_t shoff; /* Section Header Offset in file */
1201 uint32_t flags;
1202 uint16_t ehsize; /* ELROND Header SIZE -- sizeof (header) */
1203 uint16_t phentsize; /* 0 */
1204 uint16_t phnum; /* 0 */
1205 uint16_t shentsize; /* Section Header SIZE -- sizeof (section) */
1206 uint16_t shnum; /* Section Header NUM */
1207 uint16_t shstrndx; /* Section Header STRing iNDeX */
1208 };
1209 /* File section. On-disk representation. */
1210 struct section {
1211 uint32_t name; /* String table offset. */
1212 uint32_t type; /* SHT_* */
1213 uint32_t flags; /* SHF_* */
1214 uint32_t addr; /* 0 */
1215 uint32_t offset; /* OFFSET in file */
1216 uint32_t size; /* SIZE of section */
1217 uint32_t link; /* 0 */
1218 uint32_t info; /* 0 */
1219 uint32_t addralign; /* 0 */
1220 uint32_t entsize; /* ENTry SIZE, usually 0 */
1221 };
1222
1223 protected:
1224 data hdr; /* The header. */
1225 data sectab; /* The section table. */
1226 data strtab; /* String table. */
1227 int fd; /* File descriptor we're reading or writing. */
1228 int err; /* Sticky error code. */
1229
1230 public:
1231 /* Construct from STREAM. E is errno if STREAM NULL. */
1232 elf (int fd, int e)
1233 :hdr (), sectab (), strtab (), fd (fd), err (fd >= 0 ? 0 : e)
1234 {}
1235 ~elf ()
1236 {
1237 gcc_checking_assert (fd < 0 && !hdr.buffer
1238 && !sectab.buffer && !strtab.buffer);
1239 }
1240
1241 public:
1242 /* Return the error, if we have an error. */
1243 int get_error () const
1244 {
1245 return err;
1246 }
1247 /* Set the error, unless it's already been set. */
1248 void set_error (int e = E_BAD_DATA)
1249 {
1250 if (!err)
1251 err = e;
1252 }
1253 /* Get an error string. */
1254 const char *get_error (const char *) const;
1255
1256 public:
1257 /* Begin reading/writing file. Return false on error. */
1258 bool begin () const
1259 {
1260 return !get_error ();
1261 }
1262 /* Finish reading/writing file. Return false on error. */
1263 bool end ();
1264 };
1265
1266 /* Return error string. */
1267
1268 const char *
1269 elf::get_error (const char *name) const
1270 {
1271 if (!name)
1272 return "Unknown CMI mapping";
1273
1274 switch (err)
1275 {
1276 case 0:
1277 gcc_unreachable ();
1278 case E_BAD_DATA:
1279 return "Bad file data";
1280 case E_BAD_IMPORT:
1281 return "Bad import dependency";
1282 case E_BAD_LAZY:
1283 return "Bad lazy ordering";
1284 default:
1285 return xstrerror (err);
1286 }
1287 }
1288
1289 /* Finish file, return true if there's an error. */
1290
1291 bool
1292 elf::end ()
1293 {
1294 /* Close the stream and free the section table. */
1295 if (fd >= 0 && close (fd))
1296 set_error (errno);
1297 fd = -1;
1298
1299 return !get_error ();
1300 }
1301
1302 /* ELROND reader. */
1303
1304 class elf_in : public elf {
1305 typedef elf parent;
1306
1307 private:
1308 /* For freezing & defrosting. */
1309 #if !defined (HOST_LACKS_INODE_NUMBERS)
1310 dev_t device;
1311 ino_t inode;
1312 #endif
1313
1314 public:
1315 elf_in (int fd, int e)
1316 :parent (fd, e)
1317 {
1318 }
1319 ~elf_in ()
1320 {
1321 }
1322
1323 public:
1324 bool is_frozen () const
1325 {
1326 return fd < 0 && hdr.pos;
1327 }
1328 bool is_freezable () const
1329 {
1330 return fd >= 0 && hdr.pos;
1331 }
1332 void freeze ();
1333 bool defrost (const char *);
1334
1335 /* If BYTES is in the mmapped area, allocate a new buffer for it. */
1336 void preserve (bytes_in &bytes ATTRIBUTE_UNUSED)
1337 {
1338 #if MAPPED_READING
1339 if (hdr.buffer && bytes.buffer >= hdr.buffer
1340 && bytes.buffer < hdr.buffer + hdr.pos)
1341 {
1342 char *buf = bytes.buffer;
1343 bytes.buffer = data::simple_memory.grow (NULL, bytes.size);
1344 memcpy (bytes.buffer, buf, bytes.size);
1345 }
1346 #endif
1347 }
1348 /* If BYTES is not in SELF's mmapped area, free it. SELF might be
1349 NULL. */
1350 static void release (elf_in *self ATTRIBUTE_UNUSED, bytes_in &bytes)
1351 {
1352 #if MAPPED_READING
1353 if (!(self && self->hdr.buffer && bytes.buffer >= self->hdr.buffer
1354 && bytes.buffer < self->hdr.buffer + self->hdr.pos))
1355 #endif
1356 data::simple_memory.shrink (bytes.buffer);
1357 bytes.buffer = NULL;
1358 bytes.size = 0;
1359 }
1360
1361 public:
1362 static void grow (data &data, unsigned needed)
1363 {
1364 gcc_checking_assert (!data.buffer);
1365 #if !MAPPED_READING
1366 data.buffer = XNEWVEC (char, needed);
1367 #endif
1368 data.size = needed;
1369 }
1370 static void shrink (data &data)
1371 {
1372 #if !MAPPED_READING
1373 XDELETEVEC (data.buffer);
1374 #endif
1375 data.buffer = NULL;
1376 data.size = 0;
1377 }
1378
1379 public:
1380 const section *get_section (unsigned s) const
1381 {
1382 if (s * sizeof (section) < sectab.size)
1383 return reinterpret_cast<const section *>
1384 (&sectab.buffer[s * sizeof (section)]);
1385 else
1386 return NULL;
1387 }
1388 unsigned get_section_limit () const
1389 {
1390 return sectab.size / sizeof (section);
1391 }
1392
1393 protected:
1394 const char *read (data *, unsigned, unsigned);
1395
1396 public:
1397 /* Read section by number. */
1398 bool read (data *d, const section *s)
1399 {
1400 return s && read (d, s->offset, s->size);
1401 }
1402
1403 /* Find section by name. */
1404 unsigned find (const char *name);
1405 /* Find section by index. */
1406 const section *find (unsigned snum, unsigned type = SHT_PROGBITS);
1407
1408 public:
1409 /* Release the string table, when we're done with it. */
1410 void release ()
1411 {
1412 shrink (strtab);
1413 }
1414
1415 public:
1416 bool begin (location_t);
1417 bool end ()
1418 {
1419 release ();
1420 #if MAPPED_READING
1421 if (hdr.buffer)
1422 munmap (hdr.buffer, hdr.pos);
1423 hdr.buffer = NULL;
1424 #endif
1425 shrink (sectab);
1426
1427 return parent::end ();
1428 }
1429
1430 public:
1431 /* Return string name at OFFSET. Checks OFFSET range. Always
1432 returns non-NULL. We know offset 0 is an empty string. */
1433 const char *name (unsigned offset)
1434 {
1435 return &strtab.buffer[offset < strtab.size ? offset : 0];
1436 }
1437 };
1438
1439 /* ELROND writer. */
1440
1441 class elf_out : public elf, public data::allocator {
1442 typedef elf parent;
1443 /* Desired section alignment on disk. */
1444 static const int SECTION_ALIGN = 16;
1445
1446 private:
1447 ptr_int_hash_map identtab; /* Map of IDENTIFIERS to strtab offsets. */
1448 unsigned pos; /* Write position in file. */
1449 #if MAPPED_WRITING
1450 unsigned offset; /* Offset of the mapping. */
1451 unsigned extent; /* Length of mapping. */
1452 unsigned page_size; /* System page size. */
1453 #endif
1454
1455 public:
1456 elf_out (int fd, int e)
1457 :parent (fd, e), identtab (500), pos (0)
1458 {
1459 #if MAPPED_WRITING
1460 offset = extent = 0;
1461 page_size = sysconf (_SC_PAGE_SIZE);
1462 if (page_size < SECTION_ALIGN)
1463 /* Something really strange. */
1464 set_error (EINVAL);
1465 #endif
1466 }
1467 ~elf_out ()
1468 {
1469 data::simple_memory.shrink (hdr);
1470 data::simple_memory.shrink (sectab);
1471 data::simple_memory.shrink (strtab);
1472 }
1473
1474 #if MAPPED_WRITING
1475 private:
1476 void create_mapping (unsigned ext, bool extending = true);
1477 void remove_mapping ();
1478 #endif
1479
1480 protected:
1481 using allocator::grow;
1482 virtual char *grow (char *, unsigned needed);
1483 #if MAPPED_WRITING
1484 using allocator::shrink;
1485 virtual void shrink (char *);
1486 #endif
1487
1488 public:
1489 unsigned get_section_limit () const
1490 {
1491 return sectab.pos / sizeof (section);
1492 }
1493
1494 protected:
1495 unsigned add (unsigned type, unsigned name = 0,
1496 unsigned off = 0, unsigned size = 0, unsigned flags = SHF_NONE);
1497 unsigned write (const data &);
1498 #if MAPPED_WRITING
1499 unsigned write (const bytes_out &);
1500 #endif
1501
1502 public:
1503 /* IDENTIFIER to strtab offset. */
1504 unsigned name (tree ident);
1505 /* String literal to strtab offset. */
1506 unsigned name (const char *n);
1507 /* Qualified name of DECL to strtab offset. */
1508 unsigned qualified_name (tree decl, bool is_defn);
1509
1510 private:
1511 unsigned strtab_write (const char *s, unsigned l);
1512 void strtab_write (tree decl, int);
1513
1514 public:
1515 /* Add a section with contents or strings. */
1516 unsigned add (const bytes_out &, bool string_p, unsigned name);
1517
1518 public:
1519 /* Begin and end writing. */
1520 bool begin ();
1521 bool end ();
1522 };
1523
1524 /* Begin reading section NAME (of type PROGBITS) from SOURCE.
1525 Data always checked for CRC. */
1526
1527 bool
1528 bytes_in::begin (location_t loc, elf_in *source, const char *name)
1529 {
1530 unsigned snum = source->find (name);
1531
1532 return begin (loc, source, snum, name);
1533 }
1534
1535 /* Begin reading section numbered SNUM with NAME (may be NULL). */
1536
1537 bool
1538 bytes_in::begin (location_t loc, elf_in *source, unsigned snum, const char *name)
1539 {
1540 if (!source->read (this, source->find (snum))
1541 || !size || !check_crc ())
1542 {
1543 source->set_error (elf::E_BAD_DATA);
1544 source->shrink (*this);
1545 if (name)
1546 error_at (loc, "section %qs is missing or corrupted", name);
1547 else
1548 error_at (loc, "section #%u is missing or corrupted", snum);
1549 return false;
1550 }
1551 pos = 4;
1552 return true;
1553 }
1554
1555 /* Finish reading a section. */
1556
1557 bool
1558 bytes_in::end (elf_in *src)
1559 {
1560 if (more_p ())
1561 set_overrun ();
1562 if (overrun)
1563 src->set_error ();
1564
1565 src->shrink (*this);
1566
1567 return !overrun;
1568 }
1569
1570 /* Begin writing buffer. */
1571
1572 void
1573 bytes_out::begin (bool need_crc)
1574 {
1575 if (need_crc)
1576 pos = 4;
1577 memory->grow (*this, 0, false);
1578 }
1579
1580 /* Finish writing buffer. Stream out to SINK as named section NAME.
1581 Return section number or 0 on failure. If CRC_PTR is true, crc
1582 the data. Otherwise it is a string section. */
1583
1584 unsigned
1585 bytes_out::end (elf_out *sink, unsigned name, unsigned *crc_ptr)
1586 {
1587 lengths[3] += pos;
1588 spans[3]++;
1589
1590 set_crc (crc_ptr);
1591 unsigned sec_num = sink->add (*this, !crc_ptr, name);
1592 memory->shrink (*this);
1593
1594 return sec_num;
1595 }
1596
1597 /* Close and open the file, without destroying it. */
1598
1599 void
1600 elf_in::freeze ()
1601 {
1602 gcc_checking_assert (!is_frozen ());
1603 #if MAPPED_READING
1604 if (munmap (hdr.buffer, hdr.pos) < 0)
1605 set_error (errno);
1606 #endif
1607 if (close (fd) < 0)
1608 set_error (errno);
1609 fd = -1;
1610 }
1611
1612 bool
1613 elf_in::defrost (const char *name)
1614 {
1615 gcc_checking_assert (is_frozen ());
1616 struct stat stat;
1617
1618 fd = open (name, O_RDONLY | O_CLOEXEC | O_BINARY);
1619 if (fd < 0 || fstat (fd, &stat) < 0)
1620 set_error (errno);
1621 else
1622 {
1623 bool ok = hdr.pos == unsigned (stat.st_size);
1624 #ifndef HOST_LACKS_INODE_NUMBERS
1625 if (device != stat.st_dev
1626 || inode != stat.st_ino)
1627 ok = false;
1628 #endif
1629 if (!ok)
1630 set_error (EMFILE);
1631 #if MAPPED_READING
1632 if (ok)
1633 {
1634 char *mapping = reinterpret_cast<char *>
1635 (mmap (NULL, hdr.pos, PROT_READ, MAP_SHARED, fd, 0));
1636 if (mapping == MAP_FAILED)
1637 fail:
1638 set_error (errno);
1639 else
1640 {
1641 if (madvise (mapping, hdr.pos, MADV_RANDOM))
1642 goto fail;
1643
1644 /* These buffers are never NULL in this case. */
1645 strtab.buffer = mapping + strtab.pos;
1646 sectab.buffer = mapping + sectab.pos;
1647 hdr.buffer = mapping;
1648 }
1649 }
1650 #endif
1651 }
1652
1653 return !get_error ();
1654 }
1655
1656 /* Read at current position into BUFFER. Return true on success. */
1657
1658 const char *
1659 elf_in::read (data *data, unsigned pos, unsigned length)
1660 {
1661 #if MAPPED_READING
1662 if (pos + length > hdr.pos)
1663 {
1664 set_error (EINVAL);
1665 return NULL;
1666 }
1667 #else
1668 if (pos != ~0u && lseek (fd, pos, SEEK_SET) < 0)
1669 {
1670 set_error (errno);
1671 return NULL;
1672 }
1673 #endif
1674 grow (*data, length);
1675 #if MAPPED_READING
1676 data->buffer = hdr.buffer + pos;
1677 #else
1678 if (::read (fd, data->buffer, data->size) != ssize_t (length))
1679 {
1680 set_error (errno);
1681 shrink (*data);
1682 return NULL;
1683 }
1684 #endif
1685
1686 return data->buffer;
1687 }
1688
1689 /* Read section SNUM of TYPE. Return section pointer or NULL on error. */
1690
1691 const elf::section *
1692 elf_in::find (unsigned snum, unsigned type)
1693 {
1694 const section *sec = get_section (snum);
1695 if (!snum || !sec || sec->type != type)
1696 return NULL;
1697 return sec;
1698 }
1699
1700 /* Find a section NAME and TYPE. Return section number, or zero on
1701 failure. */
1702
1703 unsigned
1704 elf_in::find (const char *sname)
1705 {
1706 for (unsigned pos = sectab.size; pos -= sizeof (section); )
1707 {
1708 const section *sec
1709 = reinterpret_cast<const section *> (&sectab.buffer[pos]);
1710
1711 if (0 == strcmp (sname, name (sec->name)))
1712 return pos / sizeof (section);
1713 }
1714
1715 return 0;
1716 }
1717
1718 /* Begin reading file. Verify header. Pull in section and string
1719 tables. Return true on success. */
1720
1721 bool
1722 elf_in::begin (location_t loc)
1723 {
1724 if (!parent::begin ())
1725 return false;
1726
1727 struct stat stat;
1728 unsigned size = 0;
1729 if (!fstat (fd, &stat))
1730 {
1731 #if !defined (HOST_LACKS_INODE_NUMBERS)
1732 device = stat.st_dev;
1733 inode = stat.st_ino;
1734 #endif
1735 /* Never generate files > 4GB, check we've not been given one. */
1736 if (stat.st_size == unsigned (stat.st_size))
1737 size = unsigned (stat.st_size);
1738 }
1739
1740 #if MAPPED_READING
1741 /* MAP_SHARED so that the file is backing store. If someone else
1742 concurrently writes it, they're wrong. */
1743 void *mapping = mmap (NULL, size, PROT_READ, MAP_SHARED, fd, 0);
1744 if (mapping == MAP_FAILED)
1745 {
1746 fail:
1747 set_error (errno);
1748 return false;
1749 }
1750 /* We'll be hopping over this randomly. Some systems declare the
1751 first parm as char *, and other declare it as void *. */
1752 if (madvise (reinterpret_cast <char *> (mapping), size, MADV_RANDOM))
1753 goto fail;
1754
1755 hdr.buffer = (char *)mapping;
1756 #else
1757 read (&hdr, 0, sizeof (header));
1758 #endif
1759 hdr.pos = size; /* Record size of the file. */
1760
1761 const header *h = reinterpret_cast<const header *> (hdr.buffer);
1762 if (!h)
1763 return false;
1764
1765 if (h->ident.magic[0] != 0x7f
1766 || h->ident.magic[1] != 'E'
1767 || h->ident.magic[2] != 'L'
1768 || h->ident.magic[3] != 'F')
1769 {
1770 error_at (loc, "not Encapsulated Lazy Records of Named Declarations");
1771 failed:
1772 shrink (hdr);
1773 return false;
1774 }
1775
1776 /* We expect a particular format -- the ELF is not intended to be
1777 distributable. */
1778 if (h->ident.klass != MY_CLASS
1779 || h->ident.data != MY_ENDIAN
1780 || h->ident.version != EV_CURRENT
1781 || h->type != ET_NONE
1782 || h->machine != EM_NONE
1783 || h->ident.osabi != OSABI_NONE)
1784 {
1785 error_at (loc, "unexpected encapsulation format or type");
1786 goto failed;
1787 }
1788
1789 int e = -1;
1790 if (!h->shoff || h->shentsize != sizeof (section))
1791 {
1792 malformed:
1793 set_error (e);
1794 error_at (loc, "encapsulation is malformed");
1795 goto failed;
1796 }
1797
1798 unsigned strndx = h->shstrndx;
1799 unsigned shnum = h->shnum;
1800 if (shnum == SHN_XINDEX)
1801 {
1802 if (!read (&sectab, h->shoff, sizeof (section)))
1803 {
1804 section_table_fail:
1805 e = errno;
1806 goto malformed;
1807 }
1808 shnum = get_section (0)->size;
1809 /* Freeing does mean we'll re-read it in the case we're not
1810 mapping, but this is going to be rare. */
1811 shrink (sectab);
1812 }
1813
1814 if (!shnum)
1815 goto malformed;
1816
1817 if (!read (&sectab, h->shoff, shnum * sizeof (section)))
1818 goto section_table_fail;
1819
1820 if (strndx == SHN_XINDEX)
1821 strndx = get_section (0)->link;
1822
1823 if (!read (&strtab, find (strndx, SHT_STRTAB)))
1824 goto malformed;
1825
1826 /* The string table should be at least one byte, with NUL chars
1827 at either end. */
1828 if (!(strtab.size && !strtab.buffer[0]
1829 && !strtab.buffer[strtab.size - 1]))
1830 goto malformed;
1831
1832 #if MAPPED_READING
1833 /* Record the offsets of the section and string tables. */
1834 sectab.pos = h->shoff;
1835 strtab.pos = shnum * sizeof (section);
1836 #else
1837 shrink (hdr);
1838 #endif
1839
1840 return true;
1841 }
1842
1843 /* Create a new mapping. */
1844
1845 #if MAPPED_WRITING
1846 void
1847 elf_out::create_mapping (unsigned ext, bool extending)
1848 {
1849 #ifndef HAVE_POSIX_FALLOCATE
1850 #define posix_fallocate(fd,off,len) ftruncate (fd, off + len)
1851 #endif
1852 void *mapping = MAP_FAILED;
1853 if (extending && ext < 1024 * 1024)
1854 {
1855 if (!posix_fallocate (fd, offset, ext * 2))
1856 mapping = mmap (NULL, ext * 2, PROT_READ | PROT_WRITE,
1857 MAP_SHARED, fd, offset);
1858 if (mapping != MAP_FAILED)
1859 ext *= 2;
1860 }
1861 if (mapping == MAP_FAILED)
1862 {
1863 if (!extending || !posix_fallocate (fd, offset, ext))
1864 mapping = mmap (NULL, ext, PROT_READ | PROT_WRITE,
1865 MAP_SHARED, fd, offset);
1866 if (mapping == MAP_FAILED)
1867 {
1868 set_error (errno);
1869 mapping = NULL;
1870 ext = 0;
1871 }
1872 }
1873 #undef posix_fallocate
1874 hdr.buffer = (char *)mapping;
1875 extent = ext;
1876 }
1877 #endif
1878
1879 /* Flush out the current mapping. */
1880
1881 #if MAPPED_WRITING
1882 void
1883 elf_out::remove_mapping ()
1884 {
1885 if (hdr.buffer)
1886 {
1887 /* MS_ASYNC dtrt with the removed mapping, including a
1888 subsequent overlapping remap. */
1889 if (msync (hdr.buffer, extent, MS_ASYNC)
1890 || munmap (hdr.buffer, extent))
1891 /* We're somewhat screwed at this point. */
1892 set_error (errno);
1893 }
1894
1895 hdr.buffer = NULL;
1896 }
1897 #endif
1898
1899 /* Grow a mapping of PTR to be NEEDED bytes long. This gets
1900 interesting if the new size grows the EXTENT. */
1901
1902 char *
1903 elf_out::grow (char *data, unsigned needed)
1904 {
1905 if (!data)
1906 {
1907 /* First allocation, check we're aligned. */
1908 gcc_checking_assert (!(pos & (SECTION_ALIGN - 1)));
1909 #if MAPPED_WRITING
1910 data = hdr.buffer + (pos - offset);
1911 #endif
1912 }
1913
1914 #if MAPPED_WRITING
1915 unsigned off = data - hdr.buffer;
1916 if (off + needed > extent)
1917 {
1918 /* We need to grow the mapping. */
1919 unsigned lwm = off & ~(page_size - 1);
1920 unsigned hwm = (off + needed + page_size - 1) & ~(page_size - 1);
1921
1922 gcc_checking_assert (hwm > extent);
1923
1924 remove_mapping ();
1925
1926 offset += lwm;
1927 create_mapping (extent < hwm - lwm ? hwm - lwm : extent);
1928
1929 data = hdr.buffer + (off - lwm);
1930 }
1931 #else
1932 data = allocator::grow (data, needed);
1933 #endif
1934
1935 return data;
1936 }
1937
1938 #if MAPPED_WRITING
1939 /* Shrinking is a NOP. */
1940 void
1941 elf_out::shrink (char *)
1942 {
1943 }
1944 #endif
1945
1946 /* Write S of length L to the strtab buffer. L must include the ending
1947 NUL, if that's what you want. */
1948
1949 unsigned
1950 elf_out::strtab_write (const char *s, unsigned l)
1951 {
1952 if (strtab.pos + l > strtab.size)
1953 data::simple_memory.grow (strtab, strtab.pos + l, false);
1954 memcpy (strtab.buffer + strtab.pos, s, l);
1955 unsigned res = strtab.pos;
1956 strtab.pos += l;
1957 return res;
1958 }
1959
1960 /* Write qualified name of decl. INNER >0 if this is a definition, <0
1961 if this is a qualifier of an outer name. */
1962
1963 void
1964 elf_out::strtab_write (tree decl, int inner)
1965 {
1966 tree ctx = CP_DECL_CONTEXT (decl);
1967 if (TYPE_P (ctx))
1968 ctx = TYPE_NAME (ctx);
1969 if (ctx != global_namespace)
1970 strtab_write (ctx, -1);
1971
1972 tree name = DECL_NAME (decl);
1973 if (!name)
1974 name = DECL_ASSEMBLER_NAME_RAW (decl);
1975 strtab_write (IDENTIFIER_POINTER (name), IDENTIFIER_LENGTH (name));
1976
1977 if (inner)
1978 strtab_write (&"::{}"[inner+1], 2);
1979 }
1980
1981 /* Map IDENTIFIER IDENT to strtab offset. Inserts into strtab if not
1982 already there. */
1983
1984 unsigned
1985 elf_out::name (tree ident)
1986 {
1987 unsigned res = 0;
1988 if (ident)
1989 {
1990 bool existed;
1991 int *slot = &identtab.get_or_insert (ident, &existed);
1992 if (!existed)
1993 *slot = strtab_write (IDENTIFIER_POINTER (ident),
1994 IDENTIFIER_LENGTH (ident) + 1);
1995 res = *slot;
1996 }
1997 return res;
1998 }
1999
2000 /* Map LITERAL to strtab offset. Does not detect duplicates and
2001 expects LITERAL to remain live until strtab is written out. */
2002
2003 unsigned
2004 elf_out::name (const char *literal)
2005 {
2006 return strtab_write (literal, strlen (literal) + 1);
2007 }
2008
2009 /* Map a DECL's qualified name to strtab offset. Does not detect
2010 duplicates. */
2011
2012 unsigned
2013 elf_out::qualified_name (tree decl, bool is_defn)
2014 {
2015 gcc_checking_assert (DECL_P (decl) && decl != global_namespace);
2016 unsigned result = strtab.pos;
2017
2018 strtab_write (decl, is_defn);
2019 strtab_write ("", 1);
2020
2021 return result;
2022 }
2023
2024 /* Add section to file. Return section number. TYPE & NAME identify
2025 the section. OFF and SIZE identify the file location of its
2026 data. FLAGS contains additional info. */
2027
2028 unsigned
2029 elf_out::add (unsigned type, unsigned name, unsigned off, unsigned size,
2030 unsigned flags)
2031 {
2032 gcc_checking_assert (!(off & (SECTION_ALIGN - 1)));
2033 if (sectab.pos + sizeof (section) > sectab.size)
2034 data::simple_memory.grow (sectab, sectab.pos + sizeof (section), false);
2035 section *sec = reinterpret_cast<section *> (sectab.buffer + sectab.pos);
2036 memset (sec, 0, sizeof (section));
2037 sec->type = type;
2038 sec->flags = flags;
2039 sec->name = name;
2040 sec->offset = off;
2041 sec->size = size;
2042 if (flags & SHF_STRINGS)
2043 sec->entsize = 1;
2044
2045 unsigned res = sectab.pos;
2046 sectab.pos += sizeof (section);
2047 return res / sizeof (section);
2048 }
2049
2050 /* Pad to the next alignment boundary, then write BUFFER to disk.
2051 Return the position of the start of the write, or zero on failure. */
2052
2053 unsigned
2054 elf_out::write (const data &buffer)
2055 {
2056 #if MAPPED_WRITING
2057 /* HDR is always mapped. */
2058 if (&buffer != &hdr)
2059 {
2060 bytes_out out (this);
2061 grow (out, buffer.pos, true);
2062 if (out.buffer)
2063 memcpy (out.buffer, buffer.buffer, buffer.pos);
2064 shrink (out);
2065 }
2066 else
2067 /* We should have been aligned during the first allocation. */
2068 gcc_checking_assert (!(pos & (SECTION_ALIGN - 1)));
2069 #else
2070 if (::write (fd, buffer.buffer, buffer.pos) != ssize_t (buffer.pos))
2071 {
2072 set_error (errno);
2073 return 0;
2074 }
2075 #endif
2076 unsigned res = pos;
2077 pos += buffer.pos;
2078
2079 if (unsigned padding = -pos & (SECTION_ALIGN - 1))
2080 {
2081 #if !MAPPED_WRITING
2082 /* Align the section on disk, should help the necessary copies.
2083 fseeking to extend is non-portable. */
2084 static char zero[SECTION_ALIGN];
2085 if (::write (fd, &zero, padding) != ssize_t (padding))
2086 set_error (errno);
2087 #endif
2088 pos += padding;
2089 }
2090 return res;
2091 }
2092
2093 /* Write a streaming buffer. It must be using us as an allocator. */
2094
2095 #if MAPPED_WRITING
2096 unsigned
2097 elf_out::write (const bytes_out &buf)
2098 {
2099 gcc_checking_assert (buf.memory == this);
2100 /* A directly mapped buffer. */
2101 gcc_checking_assert (buf.buffer - hdr.buffer >= 0
2102 && buf.buffer - hdr.buffer + buf.size <= extent);
2103 unsigned res = pos;
2104 pos += buf.pos;
2105
2106 /* Align up. We're not going to advance into the next page. */
2107 pos += -pos & (SECTION_ALIGN - 1);
2108
2109 return res;
2110 }
2111 #endif
2112
2113 /* Write data and add section. STRING_P is true for a string
2114 section, false for PROGBITS. NAME identifies the section (0 is the
2115 empty name). DATA is the contents. Return section number or 0 on
2116 failure (0 is the undef section). */
2117
2118 unsigned
2119 elf_out::add (const bytes_out &data, bool string_p, unsigned name)
2120 {
2121 unsigned off = write (data);
2122
2123 return add (string_p ? SHT_STRTAB : SHT_PROGBITS, name,
2124 off, data.pos, string_p ? SHF_STRINGS : SHF_NONE);
2125 }
2126
2127 /* Begin writing the file. Initialize the section table and write an
2128 empty header. Return false on failure. */
2129
2130 bool
2131 elf_out::begin ()
2132 {
2133 if (!parent::begin ())
2134 return false;
2135
2136 /* Let the allocators pick a default. */
2137 data::simple_memory.grow (strtab, 0, false);
2138 data::simple_memory.grow (sectab, 0, false);
2139
2140 /* The string table starts with an empty string. */
2141 name ("");
2142
2143 /* Create the UNDEF section. */
2144 add (SHT_NONE);
2145
2146 #if MAPPED_WRITING
2147 /* Start a mapping. */
2148 create_mapping (EXPERIMENT (page_size,
2149 (32767 + page_size) & ~(page_size - 1)));
2150 if (!hdr.buffer)
2151 return false;
2152 #endif
2153
2154 /* Write an empty header. */
2155 grow (hdr, sizeof (header), true);
2156 header *h = reinterpret_cast<header *> (hdr.buffer);
2157 memset (h, 0, sizeof (header));
2158 hdr.pos = hdr.size;
2159 write (hdr);
2160 return !get_error ();
2161 }
2162
2163 /* Finish writing the file. Write out the string & section tables.
2164 Fill in the header. Return true on error. */
2165
2166 bool
2167 elf_out::end ()
2168 {
2169 if (fd >= 0)
2170 {
2171 /* Write the string table. */
2172 unsigned strnam = name (".strtab");
2173 unsigned stroff = write (strtab);
2174 unsigned strndx = add (SHT_STRTAB, strnam, stroff, strtab.pos,
2175 SHF_STRINGS);
2176
2177 /* Store escape values in section[0]. */
2178 if (strndx >= SHN_LORESERVE)
2179 {
2180 reinterpret_cast<section *> (sectab.buffer)->link = strndx;
2181 strndx = SHN_XINDEX;
2182 }
2183 unsigned shnum = sectab.pos / sizeof (section);
2184 if (shnum >= SHN_LORESERVE)
2185 {
2186 reinterpret_cast<section *> (sectab.buffer)->size = shnum;
2187 shnum = SHN_XINDEX;
2188 }
2189
2190 unsigned shoff = write (sectab);
2191
2192 #if MAPPED_WRITING
2193 if (offset)
2194 {
2195 remove_mapping ();
2196 offset = 0;
2197 create_mapping ((sizeof (header) + page_size - 1) & ~(page_size - 1),
2198 false);
2199 }
2200 unsigned length = pos;
2201 #else
2202 if (lseek (fd, 0, SEEK_SET) < 0)
2203 set_error (errno);
2204 #endif
2205 /* Write header. */
2206 if (!get_error ())
2207 {
2208 /* Write the correct header now. */
2209 header *h = reinterpret_cast<header *> (hdr.buffer);
2210 h->ident.magic[0] = 0x7f;
2211 h->ident.magic[1] = 'E'; /* Elrond */
2212 h->ident.magic[2] = 'L'; /* is an */
2213 h->ident.magic[3] = 'F'; /* elf. */
2214 h->ident.klass = MY_CLASS;
2215 h->ident.data = MY_ENDIAN;
2216 h->ident.version = EV_CURRENT;
2217 h->ident.osabi = OSABI_NONE;
2218 h->type = ET_NONE;
2219 h->machine = EM_NONE;
2220 h->version = EV_CURRENT;
2221 h->shoff = shoff;
2222 h->ehsize = sizeof (header);
2223 h->shentsize = sizeof (section);
2224 h->shnum = shnum;
2225 h->shstrndx = strndx;
2226
2227 pos = 0;
2228 write (hdr);
2229 }
2230
2231 #if MAPPED_WRITING
2232 remove_mapping ();
2233 if (ftruncate (fd, length))
2234 set_error (errno);
2235 #endif
2236 }
2237
2238 data::simple_memory.shrink (sectab);
2239 data::simple_memory.shrink (strtab);
2240
2241 return parent::end ();
2242 }
2243
2244 /********************************************************************/
2245
2246 /* A dependency set. This is used during stream out to determine the
2247 connectivity of the graph. Every namespace-scope declaration that
2248 needs writing has a depset. The depset is filled with the (depsets
2249 of) declarations within this module that it references. For a
2250 declaration that'll generally be named types. For definitions
2251 it'll also be declarations in the body.
2252
2253 From that we can convert the graph to a DAG, via determining the
2254 Strongly Connected Clusters. Each cluster is streamed
2255 independently, and thus we achieve lazy loading.
2256
2257 Other decls that get a depset are namespaces themselves and
2258 unnameable declarations. */
2259
2260 class depset {
2261 private:
2262 tree entity; /* Entity, or containing namespace. */
2263 uintptr_t discriminator; /* Flags or identifier. */
2264
2265 public:
2266 /* The kinds of entity the depset could describe. The ordering is
2267 significant, see entity_kind_name. */
2268 enum entity_kind
2269 {
2270 EK_DECL, /* A decl. */
2271 EK_SPECIALIZATION, /* A specialization. */
2272 EK_PARTIAL, /* A partial specialization. */
2273 EK_USING, /* A using declaration (at namespace scope). */
2274 EK_NAMESPACE, /* A namespace. */
2275 EK_REDIRECT, /* Redirect to a template_decl. */
2276 EK_EXPLICIT_HWM,
2277 EK_BINDING = EK_EXPLICIT_HWM, /* Implicitly encoded. */
2278 EK_FOR_BINDING, /* A decl being inserted for a binding. */
2279 EK_INNER_DECL, /* A decl defined outside of it's imported
2280 context. */
2281 EK_DIRECT_HWM = EK_PARTIAL + 1,
2282
2283 EK_BITS = 3 /* Only need to encode below EK_EXPLICIT_HWM. */
2284 };
2285
2286 private:
2287 /* Placement of bit fields in discriminator. */
2288 enum disc_bits
2289 {
2290 DB_ZERO_BIT, /* Set to disambiguate identifier from flags */
2291 DB_SPECIAL_BIT, /* First dep slot is special. */
2292 DB_KIND_BIT, /* Kind of the entity. */
2293 DB_KIND_BITS = EK_BITS,
2294 DB_DEFN_BIT = DB_KIND_BIT + DB_KIND_BITS,
2295 DB_IS_MEMBER_BIT, /* Is an out-of-class member. */
2296 DB_IS_INTERNAL_BIT, /* It is an (erroneous)
2297 internal-linkage entity. */
2298 DB_REFS_INTERNAL_BIT, /* Refers to an internal-linkage
2299 entity. */
2300 DB_IMPORTED_BIT, /* An imported entity. */
2301 DB_UNREACHED_BIT, /* A yet-to-be reached entity. */
2302 DB_HIDDEN_BIT, /* A hidden binding. */
2303 /* The following bits are not independent, but enumerating them is
2304 awkward. */
2305 DB_ALIAS_TMPL_INST_BIT, /* An alias template instantiation. */
2306 DB_ALIAS_SPEC_BIT, /* Specialization of an alias template
2307 (in both spec tables). */
2308 DB_TYPE_SPEC_BIT, /* Specialization in the type table.
2309 */
2310 DB_FRIEND_SPEC_BIT, /* An instantiated template friend. */
2311 };
2312
2313 public:
2314 /* The first slot is special for EK_SPECIALIZATIONS it is a
2315 spec_entry pointer. It is not relevant for the SCC
2316 determination. */
2317 vec<depset *> deps; /* Depsets we reference. */
2318
2319 public:
2320 unsigned cluster; /* Strongly connected cluster, later entity number */
2321 unsigned section; /* Section written to. */
2322 /* During SCC construction, section is lowlink, until the depset is
2323 removed from the stack. See Tarjan algorithm for details. */
2324
2325 private:
2326 /* Construction via factories. Destruction via hash traits. */
2327 depset (tree entity);
2328 ~depset ();
2329
2330 public:
2331 static depset *make_binding (tree, tree);
2332 static depset *make_entity (tree, entity_kind, bool = false);
2333 /* Late setting a binding name -- /then/ insert into hash! */
2334 inline void set_binding_name (tree name)
2335 {
2336 gcc_checking_assert (!get_name ());
2337 discriminator = reinterpret_cast<uintptr_t> (name);
2338 }
2339
2340 private:
2341 template<unsigned I> void set_flag_bit ()
2342 {
2343 gcc_checking_assert (I < 2 || !is_binding ());
2344 discriminator |= 1u << I;
2345 }
2346 template<unsigned I> void clear_flag_bit ()
2347 {
2348 gcc_checking_assert (I < 2 || !is_binding ());
2349 discriminator &= ~(1u << I);
2350 }
2351 template<unsigned I> bool get_flag_bit () const
2352 {
2353 gcc_checking_assert (I < 2 || !is_binding ());
2354 return bool ((discriminator >> I) & 1);
2355 }
2356
2357 public:
2358 bool is_binding () const
2359 {
2360 return !get_flag_bit<DB_ZERO_BIT> ();
2361 }
2362 entity_kind get_entity_kind () const
2363 {
2364 if (is_binding ())
2365 return EK_BINDING;
2366 return entity_kind ((discriminator >> DB_KIND_BIT) & ((1u << EK_BITS) - 1));
2367 }
2368 const char *entity_kind_name () const;
2369
2370 public:
2371 bool has_defn () const
2372 {
2373 return get_flag_bit<DB_DEFN_BIT> ();
2374 }
2375
2376 public:
2377 bool is_member () const
2378 {
2379 return get_flag_bit<DB_IS_MEMBER_BIT> ();
2380 }
2381 public:
2382 bool is_internal () const
2383 {
2384 return get_flag_bit<DB_IS_INTERNAL_BIT> ();
2385 }
2386 bool refs_internal () const
2387 {
2388 return get_flag_bit<DB_REFS_INTERNAL_BIT> ();
2389 }
2390 bool is_import () const
2391 {
2392 return get_flag_bit<DB_IMPORTED_BIT> ();
2393 }
2394 bool is_unreached () const
2395 {
2396 return get_flag_bit<DB_UNREACHED_BIT> ();
2397 }
2398 bool is_alias_tmpl_inst () const
2399 {
2400 return get_flag_bit<DB_ALIAS_TMPL_INST_BIT> ();
2401 }
2402 bool is_alias () const
2403 {
2404 return get_flag_bit<DB_ALIAS_SPEC_BIT> ();
2405 }
2406 bool is_hidden () const
2407 {
2408 return get_flag_bit<DB_HIDDEN_BIT> ();
2409 }
2410 bool is_type_spec () const
2411 {
2412 return get_flag_bit<DB_TYPE_SPEC_BIT> ();
2413 }
2414 bool is_friend_spec () const
2415 {
2416 return get_flag_bit<DB_FRIEND_SPEC_BIT> ();
2417 }
2418
2419 public:
2420 /* We set these bit outside of depset. */
2421 void set_hidden_binding ()
2422 {
2423 set_flag_bit<DB_HIDDEN_BIT> ();
2424 }
2425 void clear_hidden_binding ()
2426 {
2427 clear_flag_bit<DB_HIDDEN_BIT> ();
2428 }
2429
2430 public:
2431 bool is_special () const
2432 {
2433 return get_flag_bit<DB_SPECIAL_BIT> ();
2434 }
2435 void set_special ()
2436 {
2437 set_flag_bit<DB_SPECIAL_BIT> ();
2438 }
2439
2440 public:
2441 tree get_entity () const
2442 {
2443 return entity;
2444 }
2445 tree get_name () const
2446 {
2447 gcc_checking_assert (is_binding ());
2448 return reinterpret_cast <tree> (discriminator);
2449 }
2450
2451 public:
2452 /* Traits for a hash table of pointers to bindings. */
2453 struct traits {
2454 /* Each entry is a pointer to a depset. */
2455 typedef depset *value_type;
2456 /* We lookup by container:maybe-identifier pair. */
2457 typedef std::pair<tree,tree> compare_type;
2458
2459 static const bool empty_zero_p = true;
2460
2461 /* hash and equality for compare_type. */
2462 inline static hashval_t hash (const compare_type &p)
2463 {
2464 hashval_t h = pointer_hash<tree_node>::hash (p.first);
2465 if (p.second)
2466 {
2467 hashval_t nh = IDENTIFIER_HASH_VALUE (p.second);
2468 h = iterative_hash_hashval_t (h, nh);
2469 }
2470 return h;
2471 }
2472 inline static bool equal (const value_type b, const compare_type &p)
2473 {
2474 if (b->entity != p.first)
2475 return false;
2476
2477 if (p.second)
2478 return b->discriminator == reinterpret_cast<uintptr_t> (p.second);
2479 else
2480 return !b->is_binding ();
2481 }
2482
2483 /* (re)hasher for a binding itself. */
2484 inline static hashval_t hash (const value_type b)
2485 {
2486 hashval_t h = pointer_hash<tree_node>::hash (b->entity);
2487 if (b->is_binding ())
2488 {
2489 hashval_t nh = IDENTIFIER_HASH_VALUE (b->get_name ());
2490 h = iterative_hash_hashval_t (h, nh);
2491 }
2492 return h;
2493 }
2494
2495 /* Empty via NULL. */
2496 static inline void mark_empty (value_type &p) {p = NULL;}
2497 static inline bool is_empty (value_type p) {return !p;}
2498
2499 /* Nothing is deletable. Everything is insertable. */
2500 static bool is_deleted (value_type) { return false; }
2501 static void mark_deleted (value_type) { gcc_unreachable (); }
2502
2503 /* We own the entities in the hash table. */
2504 static void remove (value_type p)
2505 {
2506 delete (p);
2507 }
2508 };
2509
2510 public:
2511 class hash : public hash_table<traits> {
2512 typedef traits::compare_type key_t;
2513 typedef hash_table<traits> parent;
2514
2515 public:
2516 vec<depset *> worklist; /* Worklist of decls to walk. */
2517 hash *chain; /* Original table. */
2518 depset *current; /* Current depset being depended. */
2519 unsigned section; /* When writing out, the section. */
2520 bool sneakoscope; /* Detecting dark magic (of a voldemort). */
2521 bool reached_unreached; /* We reached an unreached entity. */
2522
2523 public:
2524 hash (size_t size, hash *c = NULL)
2525 : parent (size), chain (c), current (NULL), section (0),
2526 sneakoscope (false), reached_unreached (false)
2527 {
2528 worklist.create (size);
2529 }
2530 ~hash ()
2531 {
2532 worklist.release ();
2533 }
2534
2535 public:
2536 bool is_key_order () const
2537 {
2538 return chain != NULL;
2539 }
2540
2541 private:
2542 depset **entity_slot (tree entity, bool = true);
2543 depset **binding_slot (tree ctx, tree name, bool = true);
2544 depset *maybe_add_declaration (tree decl);
2545
2546 public:
2547 depset *find_dependency (tree entity);
2548 depset *find_binding (tree ctx, tree name);
2549 depset *make_dependency (tree decl, entity_kind);
2550 void add_dependency (depset *);
2551
2552 public:
2553 void add_mergeable (depset *);
2554 depset *add_dependency (tree decl, entity_kind);
2555 void add_namespace_context (depset *, tree ns);
2556
2557 private:
2558 static bool add_binding_entity (tree, WMB_Flags, void *);
2559
2560 public:
2561 bool add_namespace_entities (tree ns, bitmap partitions);
2562 void add_specializations (bool decl_p);
2563 void add_partial_entities (vec<tree, va_gc> *);
2564 void add_class_entities (vec<tree, va_gc> *);
2565
2566 public:
2567 void find_dependencies (module_state *);
2568 bool finalize_dependencies ();
2569 vec<depset *> connect ();
2570 };
2571
2572 public:
2573 struct tarjan {
2574 vec<depset *> result;
2575 vec<depset *> stack;
2576 unsigned index;
2577
2578 tarjan (unsigned size)
2579 : index (0)
2580 {
2581 result.create (size);
2582 stack.create (50);
2583 }
2584 ~tarjan ()
2585 {
2586 gcc_assert (!stack.length ());
2587 stack.release ();
2588 }
2589
2590 public:
2591 void connect (depset *);
2592 };
2593 };
2594
2595 inline
2596 depset::depset (tree entity)
2597 :entity (entity), discriminator (0), cluster (0), section (0)
2598 {
2599 deps.create (0);
2600 }
2601
2602 inline
2603 depset::~depset ()
2604 {
2605 deps.release ();
2606 }
2607
2608 const char *
2609 depset::entity_kind_name () const
2610 {
2611 /* Same order as entity_kind. */
2612 static const char *const names[] =
2613 {"decl", "specialization", "partial", "using",
2614 "namespace", "redirect", "binding"};
2615 entity_kind kind = get_entity_kind ();
2616 gcc_checking_assert (kind < sizeof (names) / sizeof(names[0]));
2617 return names[kind];
2618 }
2619
2620 /* Create a depset for a namespace binding NS::NAME. */
2621
2622 depset *depset::make_binding (tree ns, tree name)
2623 {
2624 depset *binding = new depset (ns);
2625
2626 binding->discriminator = reinterpret_cast <uintptr_t> (name);
2627
2628 return binding;
2629 }
2630
2631 depset *depset::make_entity (tree entity, entity_kind ek, bool is_defn)
2632 {
2633 depset *r = new depset (entity);
2634
2635 r->discriminator = ((1 << DB_ZERO_BIT)
2636 | (ek << DB_KIND_BIT)
2637 | is_defn << DB_DEFN_BIT);
2638
2639 return r;
2640 }
2641
2642 /* Values keyed to some unsigned integer. This is not GTY'd, so if
2643 T is tree they must be reachable via some other path. */
2644
2645 template<typename T>
2646 class uintset {
2647 public:
2648 unsigned key; /* Entity index of the other entity. */
2649
2650 /* Payload. */
2651 unsigned allocp2 : 5; /* log(2) allocated pending */
2652 unsigned num : 27; /* Number of pending. */
2653
2654 /* Trailing array of values. */
2655 T values[1];
2656
2657 public:
2658 /* Even with ctors, we're very pod-like. */
2659 uintset (unsigned uid)
2660 : key (uid), allocp2 (0), num (0)
2661 {
2662 }
2663 /* Copy constructor, which is exciting because of the trailing
2664 array. */
2665 uintset (const uintset *from)
2666 {
2667 size_t size = (offsetof (uintset, values)
2668 + sizeof (uintset::values) * from->num);
2669 memmove (this, from, size);
2670 if (from->num)
2671 allocp2++;
2672 }
2673
2674 public:
2675 struct traits : delete_ptr_hash<uintset> {
2676 typedef unsigned compare_type;
2677 typedef typename delete_ptr_hash<uintset>::value_type value_type;
2678
2679 /* Hash and equality for compare_type. */
2680 inline static hashval_t hash (const compare_type k)
2681 {
2682 return hashval_t (k);
2683 }
2684 inline static hashval_t hash (const value_type v)
2685 {
2686 return hash (v->key);
2687 }
2688
2689 inline static bool equal (const value_type v, const compare_type k)
2690 {
2691 return v->key == k;
2692 }
2693 };
2694
2695 public:
2696 class hash : public hash_table<traits>
2697 {
2698 typedef typename traits::compare_type key_t;
2699 typedef hash_table<traits> parent;
2700
2701 public:
2702 hash (size_t size)
2703 : parent (size)
2704 {
2705 }
2706 ~hash ()
2707 {
2708 }
2709
2710 private:
2711 uintset **find_slot (key_t key, insert_option insert)
2712 {
2713 return this->find_slot_with_hash (key, traits::hash (key), insert);
2714 }
2715
2716 public:
2717 uintset *get (key_t key, bool extract = false);
2718 bool add (key_t key, T value);
2719 uintset *create (key_t key, unsigned num, T init = 0);
2720 };
2721 };
2722
2723 /* Add VALUE to KEY's uintset, creating it if necessary. Returns true
2724 if we created the uintset. */
2725
2726 template<typename T>
2727 bool
2728 uintset<T>::hash::add (typename uintset<T>::hash::key_t key, T value)
2729 {
2730 uintset **slot = this->find_slot (key, INSERT);
2731 uintset *set = *slot;
2732 bool is_new = !set;
2733
2734 if (is_new || set->num == (1u << set->allocp2))
2735 {
2736 if (set)
2737 {
2738 unsigned n = set->num * 2;
2739 size_t new_size = (offsetof (uintset, values)
2740 + sizeof (uintset (0u).values) * n);
2741 uintset *new_set = new (::operator new (new_size)) uintset (set);
2742 delete set;
2743 set = new_set;
2744 }
2745 else
2746 set = new (::operator new (sizeof (*set))) uintset (key);
2747 *slot = set;
2748 }
2749
2750 set->values[set->num++] = value;
2751
2752 return is_new;
2753 }
2754
2755 template<typename T>
2756 uintset<T> *
2757 uintset<T>::hash::create (typename uintset<T>::hash::key_t key, unsigned num,
2758 T init)
2759 {
2760 unsigned p2alloc = 0;
2761 for (unsigned v = num; v != 1; v = (v >> 1) | (v & 1))
2762 p2alloc++;
2763
2764 size_t new_size = (offsetof (uintset, values)
2765 + (sizeof (uintset (0u).values) << p2alloc));
2766 uintset *set = new (::operator new (new_size)) uintset (key);
2767 set->allocp2 = p2alloc;
2768 set->num = num;
2769 while (num--)
2770 set->values[num] = init;
2771
2772 uintset **slot = this->find_slot (key, INSERT);
2773 gcc_checking_assert (!*slot);
2774 *slot = set;
2775
2776 return set;
2777 }
2778
2779 /* Locate KEY's uintset, potentially removing it from the hash table */
2780
2781 template<typename T>
2782 uintset<T> *
2783 uintset<T>::hash::get (typename uintset<T>::hash::key_t key, bool extract)
2784 {
2785 uintset *res = NULL;
2786
2787 if (uintset **slot = this->find_slot (key, NO_INSERT))
2788 {
2789 res = *slot;
2790 if (extract)
2791 /* We need to remove the pendset without deleting it. */
2792 traits::mark_deleted (*slot);
2793 }
2794
2795 return res;
2796 }
2797
2798 /* Entities keyed to some other entity. When we load the other
2799 entity, we mark it in some way to indicate there are further
2800 entities to load when you start looking inside it. For instance
2801 template specializations are keyed to their most general template.
2802 When we instantiate that, we need to know all the partial
2803 specializations (to pick the right template), and all the known
2804 specializations (to avoid reinstantiating it, and/or whether it's
2805 extern). The values split into two ranges. If !MSB set, indices
2806 into the entity array. If MSB set, an indirection to another
2807 pendset. */
2808
2809 typedef uintset<unsigned> pendset;
2810 static pendset::hash *pending_table;
2811
2812 /* Some entities are attached to another entitity for ODR purposes.
2813 For example, at namespace scope, 'inline auto var = []{};', that
2814 lambda is attached to 'var', and follows its ODRness. */
2815 typedef uintset<tree> attachset;
2816 static attachset::hash *attached_table;
2817
2818 /********************************************************************/
2819 /* Tree streaming. The tree streaming is very specific to the tree
2820 structures themselves. A tag indicates the kind of tree being
2821 streamed. -ve tags indicate backreferences to already-streamed
2822 trees. Backreferences are auto-numbered. */
2823
2824 /* Tree tags. */
2825 enum tree_tag {
2826 tt_null, /* NULL_TREE. */
2827 tt_fixed, /* Fixed vector index. */
2828
2829 tt_node, /* By-value node. */
2830 tt_decl, /* By-value mergeable decl. */
2831 tt_tpl_parm, /* Template parm. */
2832
2833 /* The ordering of the following 4 is relied upon in
2834 trees_out::tree_node. */
2835 tt_id, /* Identifier node. */
2836 tt_conv_id, /* Conversion operator name. */
2837 tt_anon_id, /* Anonymous name. */
2838 tt_lambda_id, /* Lambda name. */
2839
2840 tt_typedef_type, /* A (possibly implicit) typedefed type. */
2841 tt_derived_type, /* A type derived from another type. */
2842 tt_variant_type, /* A variant of another type. */
2843
2844 tt_tinfo_var, /* Typeinfo object. */
2845 tt_tinfo_typedef, /* Typeinfo typedef. */
2846 tt_ptrmem_type, /* Pointer to member type. */
2847
2848 tt_parm, /* Function parameter or result. */
2849 tt_enum_value, /* An enum value. */
2850 tt_enum_decl, /* An enum decl. */
2851 tt_data_member, /* Data member/using-decl. */
2852
2853 tt_binfo, /* A BINFO. */
2854 tt_vtable, /* A vtable. */
2855 tt_thunk, /* A thunk. */
2856 tt_clone_ref,
2857
2858 tt_entity, /* A extra-cluster entity. */
2859
2860 tt_template, /* The TEMPLATE_RESULT of a template. */
2861 };
2862
2863 enum walk_kind {
2864 WK_none, /* No walk to do (a back- or fixed-ref happened). */
2865 WK_normal, /* Normal walk (by-name if possible). */
2866
2867 WK_value, /* By-value walk. */
2868 };
2869
2870 enum merge_kind
2871 {
2872 MK_unique, /* Known unique. */
2873 MK_named, /* Found by CTX, NAME + maybe_arg types etc. */
2874 MK_field, /* Found by CTX and index on TYPE_FIELDS */
2875 MK_vtable, /* Found by CTX and index on TYPE_VTABLES */
2876 MK_as_base, /* Found by CTX. */
2877
2878 MK_partial,
2879
2880 MK_enum, /* Found by CTX, & 1stMemberNAME. */
2881 MK_attached, /* Found by attachee & index. */
2882
2883 MK_friend_spec, /* Like named, but has a tmpl & args too. */
2884 MK_local_friend, /* Found by CTX, index. */
2885
2886 MK_indirect_lwm = MK_enum,
2887
2888 /* Template specialization kinds below. These are all found via
2889 primary template and specialization args. */
2890 MK_template_mask = 0x10, /* A template specialization. */
2891
2892 MK_tmpl_decl_mask = 0x4, /* In decl table. */
2893 MK_tmpl_alias_mask = 0x2, /* Also in type table */
2894
2895 MK_tmpl_tmpl_mask = 0x1, /* We want TEMPLATE_DECL. */
2896
2897 MK_type_spec = MK_template_mask,
2898 MK_type_tmpl_spec = MK_type_spec | MK_tmpl_tmpl_mask,
2899
2900 MK_decl_spec = MK_template_mask | MK_tmpl_decl_mask,
2901 MK_decl_tmpl_spec = MK_decl_spec | MK_tmpl_tmpl_mask,
2902
2903 MK_alias_spec = MK_decl_spec | MK_tmpl_alias_mask,
2904
2905 MK_hwm = 0x20
2906 };
2907 /* This is more than a debugging array. NULLs are used to determine
2908 an invalid merge_kind number. */
2909 static char const *const merge_kind_name[MK_hwm] =
2910 {
2911 "unique", "named", "field", "vtable", /* 0...3 */
2912 "asbase", "partial", "enum", "attached", /* 4...7 */
2913
2914 "friend spec", "local friend", NULL, NULL, /* 8...11 */
2915 NULL, NULL, NULL, NULL,
2916
2917 "type spec", "type tmpl spec", /* 16,17 type (template). */
2918 NULL, NULL,
2919
2920 "decl spec", "decl tmpl spec", /* 20,21 decl (template). */
2921 "alias spec", NULL, /* 22,23 alias. */
2922 NULL, NULL, NULL, NULL,
2923 NULL, NULL, NULL, NULL,
2924 };
2925
2926 /* Mergeable entity location data. */
2927 struct merge_key {
2928 cp_ref_qualifier ref_q : 2;
2929 unsigned index;
2930
2931 tree ret; /* Return type, if appropriate. */
2932 tree args; /* Arg types, if appropriate. */
2933
2934 tree constraints; /* Constraints. */
2935
2936 merge_key ()
2937 :ref_q (REF_QUAL_NONE), index (0),
2938 ret (NULL_TREE), args (NULL_TREE),
2939 constraints (NULL_TREE)
2940 {
2941 }
2942 };
2943
2944 struct duplicate_hash : nodel_ptr_hash<tree_node>
2945 {
2946 inline static hashval_t hash (value_type decl)
2947 {
2948 if (TREE_CODE (decl) == TREE_BINFO)
2949 decl = TYPE_NAME (BINFO_TYPE (decl));
2950 return hashval_t (DECL_UID (decl));
2951 }
2952 };
2953
2954 /* Hashmap of merged duplicates. Usually decls, but can contain
2955 BINFOs. */
2956 typedef hash_map<tree,uintptr_t,
2957 simple_hashmap_traits<duplicate_hash,uintptr_t> >
2958 duplicate_hash_map;
2959
2960 /* Tree stream reader. Note that reading a stream doesn't mark the
2961 read trees with TREE_VISITED. Thus it's quite safe to have
2962 multiple concurrent readers. Which is good, because lazy
2963 loading. */
2964 class trees_in : public bytes_in {
2965 typedef bytes_in parent;
2966
2967 private:
2968 module_state *state; /* Module being imported. */
2969 vec<tree> back_refs; /* Back references. */
2970 duplicate_hash_map *duplicates; /* Map from existings to duplicate. */
2971 vec<tree> post_decls; /* Decls to post process. */
2972 unsigned unused; /* Inhibit any interior TREE_USED
2973 marking. */
2974
2975 public:
2976 trees_in (module_state *);
2977 ~trees_in ();
2978
2979 public:
2980 int insert (tree);
2981 tree back_ref (int);
2982
2983 private:
2984 tree start (unsigned = 0);
2985
2986 public:
2987 /* Needed for binfo writing */
2988 bool core_bools (tree);
2989
2990 private:
2991 /* Stream tree_core, lang_decl_specific and lang_type_specific
2992 bits. */
2993 bool core_vals (tree);
2994 bool lang_type_bools (tree);
2995 bool lang_type_vals (tree);
2996 bool lang_decl_bools (tree);
2997 bool lang_decl_vals (tree);
2998 bool lang_vals (tree);
2999 bool tree_node_bools (tree);
3000 bool tree_node_vals (tree);
3001 tree tree_value ();
3002 tree decl_value ();
3003 tree tpl_parm_value ();
3004
3005 private:
3006 tree chained_decls (); /* Follow DECL_CHAIN. */
3007 vec<tree, va_heap> *vec_chained_decls ();
3008 vec<tree, va_gc> *tree_vec (); /* vec of tree. */
3009 vec<tree_pair_s, va_gc> *tree_pair_vec (); /* vec of tree_pair. */
3010 tree tree_list (bool has_purpose);
3011
3012 public:
3013 /* Read a tree node. */
3014 tree tree_node (bool is_use = false);
3015
3016 private:
3017 bool install_entity (tree decl);
3018 tree tpl_parms (unsigned &tpl_levels);
3019 bool tpl_parms_fini (tree decl, unsigned tpl_levels);
3020 bool tpl_header (tree decl, unsigned *tpl_levels);
3021 int fn_parms_init (tree);
3022 void fn_parms_fini (int tag, tree fn, tree existing, bool has_defn);
3023 unsigned add_indirect_tpl_parms (tree);
3024 public:
3025 bool add_indirects (tree);
3026
3027 public:
3028 /* Serialize various definitions. */
3029 bool read_definition (tree decl);
3030
3031 private:
3032 bool is_matching_decl (tree existing, tree decl);
3033 static bool install_implicit_member (tree decl);
3034 bool read_function_def (tree decl, tree maybe_template);
3035 bool read_var_def (tree decl, tree maybe_template);
3036 bool read_class_def (tree decl, tree maybe_template);
3037 bool read_enum_def (tree decl, tree maybe_template);
3038
3039 public:
3040 tree decl_container ();
3041 tree key_mergeable (int tag, merge_kind, tree decl, tree inner, tree type,
3042 tree container, bool is_mod);
3043 unsigned binfo_mergeable (tree *);
3044
3045 private:
3046 uintptr_t *find_duplicate (tree existing);
3047 void register_duplicate (tree decl, tree existing);
3048 /* Mark as an already diagnosed bad duplicate. */
3049 void unmatched_duplicate (tree existing)
3050 {
3051 *find_duplicate (existing) |= 1;
3052 }
3053
3054 public:
3055 bool is_duplicate (tree decl)
3056 {
3057 return find_duplicate (decl) != NULL;
3058 }
3059 tree maybe_duplicate (tree decl)
3060 {
3061 if (uintptr_t *dup = find_duplicate (decl))
3062 return reinterpret_cast<tree> (*dup & ~uintptr_t (1));
3063 return decl;
3064 }
3065 tree odr_duplicate (tree decl, bool has_defn);
3066
3067 public:
3068 /* Return the next decl to postprocess, or NULL. */
3069 tree post_process ()
3070 {
3071 return post_decls.length () ? post_decls.pop () : NULL_TREE;
3072 }
3073 private:
3074 /* Register DECL for postprocessing. */
3075 void post_process (tree decl)
3076 {
3077 post_decls.safe_push (decl);
3078 }
3079
3080 private:
3081 void assert_definition (tree, bool installing);
3082 };
3083
3084 trees_in::trees_in (module_state *state)
3085 :parent (), state (state), unused (0)
3086 {
3087 duplicates = NULL;
3088 back_refs.create (500);
3089 post_decls.create (0);
3090 }
3091
3092 trees_in::~trees_in ()
3093 {
3094 delete (duplicates);
3095 back_refs.release ();
3096 post_decls.release ();
3097 }
3098
3099 /* Tree stream writer. */
3100 class trees_out : public bytes_out {
3101 typedef bytes_out parent;
3102
3103 private:
3104 module_state *state; /* The module we are writing. */
3105 ptr_int_hash_map tree_map; /* Trees to references */
3106 depset::hash *dep_hash; /* Dependency table. */
3107 int ref_num; /* Back reference number. */
3108 unsigned section;
3109 #if CHECKING_P
3110 int importedness; /* Checker that imports not occurring
3111 inappropriately. */
3112 #endif
3113
3114 public:
3115 trees_out (allocator *, module_state *, depset::hash &deps, unsigned sec = 0);
3116 ~trees_out ();
3117
3118 private:
3119 void mark_trees ();
3120 void unmark_trees ();
3121
3122 public:
3123 /* Hey, let's ignore the well known STL iterator idiom. */
3124 void begin ();
3125 unsigned end (elf_out *sink, unsigned name, unsigned *crc_ptr);
3126 void end ();
3127
3128 public:
3129 enum tags
3130 {
3131 tag_backref = -1, /* Upper bound on the backrefs. */
3132 tag_value = 0, /* Write by value. */
3133 tag_fixed /* Lower bound on the fixed trees. */
3134 };
3135
3136 public:
3137 bool is_key_order () const
3138 {
3139 return dep_hash->is_key_order ();
3140 }
3141
3142 public:
3143 int insert (tree, walk_kind = WK_normal);
3144
3145 private:
3146 void start (tree, bool = false);
3147
3148 private:
3149 walk_kind ref_node (tree);
3150 public:
3151 int get_tag (tree);
3152 void set_importing (int i ATTRIBUTE_UNUSED)
3153 {
3154 #if CHECKING_P
3155 importedness = i;
3156 #endif
3157 }
3158
3159 private:
3160 void core_bools (tree);
3161 void core_vals (tree);
3162 void lang_type_bools (tree);
3163 void lang_type_vals (tree);
3164 void lang_decl_bools (tree);
3165 void lang_decl_vals (tree);
3166 void lang_vals (tree);
3167 void tree_node_bools (tree);
3168 void tree_node_vals (tree);
3169
3170 private:
3171 void chained_decls (tree);
3172 void vec_chained_decls (tree);
3173 void tree_vec (vec<tree, va_gc> *);
3174 void tree_pair_vec (vec<tree_pair_s, va_gc> *);
3175 void tree_list (tree, bool has_purpose);
3176
3177 public:
3178 /* Mark a node for by-value walking. */
3179 void mark_by_value (tree);
3180
3181 public:
3182 void tree_node (tree);
3183
3184 private:
3185 void install_entity (tree decl, depset *);
3186 void tpl_parms (tree parms, unsigned &tpl_levels);
3187 void tpl_parms_fini (tree decl, unsigned tpl_levels);
3188 void fn_parms_fini (tree) {}
3189 unsigned add_indirect_tpl_parms (tree);
3190 public:
3191 void add_indirects (tree);
3192 void fn_parms_init (tree);
3193 void tpl_header (tree decl, unsigned *tpl_levels);
3194
3195 public:
3196 merge_kind get_merge_kind (tree decl, depset *maybe_dep);
3197 tree decl_container (tree decl);
3198 void key_mergeable (int tag, merge_kind, tree decl, tree inner,
3199 tree container, depset *maybe_dep);
3200 void binfo_mergeable (tree binfo);
3201
3202 private:
3203 bool decl_node (tree, walk_kind ref);
3204 void type_node (tree);
3205 void tree_value (tree);
3206 void tpl_parm_value (tree);
3207
3208 public:
3209 void decl_value (tree, depset *);
3210
3211 public:
3212 /* Serialize various definitions. */
3213 void write_definition (tree decl);
3214 void mark_declaration (tree decl, bool do_defn);
3215
3216 private:
3217 void mark_function_def (tree decl);
3218 void mark_var_def (tree decl);
3219 void mark_class_def (tree decl);
3220 void mark_enum_def (tree decl);
3221 void mark_class_member (tree decl, bool do_defn = true);
3222 void mark_binfos (tree type);
3223
3224 private:
3225 void write_var_def (tree decl);
3226 void write_function_def (tree decl);
3227 void write_class_def (tree decl);
3228 void write_enum_def (tree decl);
3229
3230 private:
3231 static void assert_definition (tree);
3232
3233 public:
3234 static void instrument ();
3235
3236 private:
3237 /* Tree instrumentation. */
3238 static unsigned tree_val_count;
3239 static unsigned decl_val_count;
3240 static unsigned back_ref_count;
3241 static unsigned null_count;
3242 };
3243
3244 /* Instrumentation counters. */
3245 unsigned trees_out::tree_val_count;
3246 unsigned trees_out::decl_val_count;
3247 unsigned trees_out::back_ref_count;
3248 unsigned trees_out::null_count;
3249
3250 trees_out::trees_out (allocator *mem, module_state *state, depset::hash &deps,
3251 unsigned section)
3252 :parent (mem), state (state), tree_map (500),
3253 dep_hash (&deps), ref_num (0), section (section)
3254 {
3255 #if CHECKING_P
3256 importedness = 0;
3257 #endif
3258 }
3259
3260 trees_out::~trees_out ()
3261 {
3262 }
3263
3264 /********************************************************************/
3265 /* Location. We're aware of the line-map concept and reproduce it
3266 here. Each imported module allocates a contiguous span of ordinary
3267 maps, and of macro maps. adhoc maps are serialized by contents,
3268 not pre-allocated. The scattered linemaps of a module are
3269 coalesced when writing. */
3270
3271
3272 /* I use half-open [first,second) ranges. */
3273 typedef std::pair<unsigned,unsigned> range_t;
3274
3275 /* A range of locations. */
3276 typedef std::pair<location_t,location_t> loc_range_t;
3277
3278 /* Spans of the line maps that are occupied by this TU. I.e. not
3279 within imports. Only extended when in an interface unit.
3280 Interval zero corresponds to the forced header linemap(s). This
3281 is a singleton object. */
3282
3283 class loc_spans {
3284 public:
3285 /* An interval of line maps. The line maps here represent a contiguous
3286 non-imported range. */
3287 struct span {
3288 loc_range_t ordinary; /* Ordinary map location range. */
3289 loc_range_t macro; /* Macro map location range. */
3290 int ordinary_delta; /* Add to ordinary loc to get serialized loc. */
3291 int macro_delta; /* Likewise for macro loc. */
3292 };
3293
3294 private:
3295 vec<span> *spans;
3296
3297 public:
3298 loc_spans ()
3299 /* Do not preallocate spans, as that causes
3300 --enable-detailed-mem-stats problems. */
3301 : spans (nullptr)
3302 {
3303 }
3304 ~loc_spans ()
3305 {
3306 delete spans;
3307 }
3308
3309 public:
3310 span &operator[] (unsigned ix)
3311 {
3312 return (*spans)[ix];
3313 }
3314 unsigned length () const
3315 {
3316 return spans->length ();
3317 }
3318
3319 public:
3320 bool init_p () const
3321 {
3322 return spans != nullptr;
3323 }
3324 /* Initializer. */
3325 void init (const line_maps *lmaps, const line_map_ordinary *map);
3326
3327 /* Slightly skewed preprocessed files can cause us to miss an
3328 initialization in some places. Fallback initializer. */
3329 void maybe_init ()
3330 {
3331 if (!init_p ())
3332 init (line_table, nullptr);
3333 }
3334
3335 public:
3336 enum {
3337 SPAN_RESERVED = 0, /* Reserved (fixed) locations. */
3338 SPAN_FIRST = 1, /* LWM of locations to stream */
3339 SPAN_MAIN = 2 /* Main file and onwards. */
3340 };
3341
3342 public:
3343 location_t main_start () const
3344 {
3345 return (*spans)[SPAN_MAIN].ordinary.first;
3346 }
3347
3348 public:
3349 void open (location_t);
3350 void close ();
3351
3352 public:
3353 /* Propagate imported linemaps to us, if needed. */
3354 bool maybe_propagate (module_state *import, location_t loc);
3355
3356 public:
3357 const span *ordinary (location_t);
3358 const span *macro (location_t);
3359 };
3360
3361 static loc_spans spans;
3362
3363 /********************************************************************/
3364 /* Data needed by a module during the process of loading. */
3365 struct GTY(()) slurping {
3366
3367 /* Remap import's module numbering to our numbering. Values are
3368 shifted by 1. Bit0 encodes if the import is direct. */
3369 vec<unsigned, va_heap, vl_embed> *
3370 GTY((skip)) remap; /* Module owner remapping. */
3371
3372 elf_in *GTY((skip)) from; /* The elf loader. */
3373
3374 /* This map is only for header imports themselves -- the global
3375 headers bitmap hold it for the current TU. */
3376 bitmap headers; /* Transitive set of direct imports, including
3377 self. Used for macro visibility and
3378 priority. */
3379
3380 /* These objects point into the mmapped area, unless we're not doing
3381 that, or we got frozen or closed. In those cases they point to
3382 buffers we own. */
3383 bytes_in macro_defs; /* Macro definitions. */
3384 bytes_in macro_tbl; /* Macro table. */
3385
3386 /* Location remapping. first->ordinary, second->macro. */
3387 range_t GTY((skip)) loc_deltas;
3388
3389 unsigned current; /* Section currently being loaded. */
3390 unsigned remaining; /* Number of lazy sections yet to read. */
3391 unsigned lru; /* An LRU counter. */
3392
3393 public:
3394 slurping (elf_in *);
3395 ~slurping ();
3396
3397 public:
3398 /* Close the ELF file, if it's open. */
3399 void close ()
3400 {
3401 if (from)
3402 {
3403 from->end ();
3404 delete from;
3405 from = NULL;
3406 }
3407 }
3408
3409 public:
3410 void release_macros ();
3411
3412 public:
3413 void alloc_remap (unsigned size)
3414 {
3415 gcc_assert (!remap);
3416 vec_safe_reserve (remap, size);
3417 for (unsigned ix = size; ix--;)
3418 remap->quick_push (0);
3419 }
3420 unsigned remap_module (unsigned owner)
3421 {
3422 if (owner < remap->length ())
3423 return (*remap)[owner] >> 1;
3424 return 0;
3425 }
3426
3427 public:
3428 /* GC allocation. But we must explicitly delete it. */
3429 static void *operator new (size_t x)
3430 {
3431 return ggc_alloc_atomic (x);
3432 }
3433 static void operator delete (void *p)
3434 {
3435 ggc_free (p);
3436 }
3437 };
3438
3439 slurping::slurping (elf_in *from)
3440 : remap (NULL), from (from),
3441 headers (BITMAP_GGC_ALLOC ()), macro_defs (), macro_tbl (),
3442 loc_deltas (0, 0),
3443 current (~0u), remaining (0), lru (0)
3444 {
3445 }
3446
3447 slurping::~slurping ()
3448 {
3449 vec_free (remap);
3450 remap = NULL;
3451 release_macros ();
3452 close ();
3453 }
3454
3455 void slurping::release_macros ()
3456 {
3457 if (macro_defs.size)
3458 elf_in::release (from, macro_defs);
3459 if (macro_tbl.size)
3460 elf_in::release (from, macro_tbl);
3461 }
3462
3463 /* Information about location maps used during writing. */
3464
3465 struct location_map_info {
3466 range_t num_maps;
3467
3468 unsigned max_range;
3469 };
3470
3471 /* Flage for extensions that end up being streamed. */
3472
3473 enum streamed_extensions {
3474 SE_OPENMP = 1 << 0,
3475 SE_BITS = 1
3476 };
3477
3478 /********************************************************************/
3479 struct module_state_config;
3480
3481 /* Increasing levels of loadedness. */
3482 enum module_loadedness {
3483 ML_NONE, /* Not loaded. */
3484 ML_CONFIG, /* Config loaed. */
3485 ML_PREPROCESSOR, /* Preprocessor loaded. */
3486 ML_LANGUAGE, /* Language loaded. */
3487 };
3488
3489 /* Increasing levels of directness (toplevel) of import. */
3490 enum module_directness {
3491 MD_NONE, /* Not direct. */
3492 MD_PARTITION_DIRECT, /* Direct import of a partition. */
3493 MD_DIRECT, /* Direct import. */
3494 MD_PURVIEW_DIRECT, /* direct import in purview. */
3495 };
3496
3497 /* State of a particular module. */
3498
3499 class GTY((chain_next ("%h.parent"), for_user)) module_state {
3500 public:
3501 /* We always import & export ourselves. */
3502 bitmap imports; /* Transitive modules we're importing. */
3503 bitmap exports; /* Subset of that, that we're exporting. */
3504
3505 module_state *parent;
3506 tree name; /* Name of the module. */
3507
3508 slurping *slurp; /* Data for loading. */
3509
3510 const char *flatname; /* Flatname of module. */
3511 char *filename; /* CMI Filename */
3512
3513 /* Indices into the entity_ary. */
3514 unsigned entity_lwm;
3515 unsigned entity_num;
3516
3517 /* Location ranges for this module. adhoc-locs are decomposed, so
3518 don't have a range. */
3519 loc_range_t GTY((skip)) ordinary_locs;
3520 loc_range_t GTY((skip)) macro_locs;
3521
3522 /* LOC is first set too the importing location. When initially
3523 loaded it refers to a module loc whose parent is the importing
3524 location. */
3525 location_t loc; /* Location referring to module itself. */
3526 unsigned crc; /* CRC we saw reading it in. */
3527
3528 unsigned mod; /* Module owner number. */
3529 unsigned remap; /* Remapping during writing. */
3530
3531 unsigned short subst; /* Mangle subst if !0. */
3532
3533 /* How loaded this module is. */
3534 enum module_loadedness loadedness : 2;
3535
3536 bool module_p : 1; /* /The/ module of this TU. */
3537 bool header_p : 1; /* Is a header unit. */
3538 bool interface_p : 1; /* An interface. */
3539 bool partition_p : 1; /* A partition. */
3540
3541 /* How directly this module is imported. */
3542 enum module_directness directness : 2;
3543
3544 bool exported_p : 1; /* directness != MD_NONE && exported. */
3545 bool cmi_noted_p : 1; /* We've told the user about the CMI, don't
3546 do it again */
3547 bool call_init_p : 1; /* This module's global initializer needs
3548 calling. */
3549 /* Record extensions emitted or permitted. */
3550 unsigned extensions : SE_BITS;
3551 /* 12 bits used, 4 bits remain */
3552
3553 public:
3554 module_state (tree name, module_state *, bool);
3555 ~module_state ();
3556
3557 public:
3558 void release ()
3559 {
3560 imports = exports = NULL;
3561 slurped ();
3562 }
3563 void slurped ()
3564 {
3565 delete slurp;
3566 slurp = NULL;
3567 }
3568 elf_in *from () const
3569 {
3570 return slurp->from;
3571 }
3572
3573 public:
3574 /* Kind of this module. */
3575 bool is_module () const
3576 {
3577 return module_p;
3578 }
3579 bool is_header () const
3580 {
3581 return header_p;
3582 }
3583 bool is_interface () const
3584 {
3585 return interface_p;
3586 }
3587 bool is_partition () const
3588 {
3589 return partition_p;
3590 }
3591
3592 /* How this module is used in the current TU. */
3593 bool is_exported () const
3594 {
3595 return exported_p;
3596 }
3597 bool is_direct () const
3598 {
3599 return directness >= MD_DIRECT;
3600 }
3601 bool is_purview_direct () const
3602 {
3603 return directness == MD_PURVIEW_DIRECT;
3604 }
3605 bool is_partition_direct () const
3606 {
3607 return directness == MD_PARTITION_DIRECT;
3608 }
3609
3610 public:
3611 /* Is this not a real module? */
3612 bool is_rooted () const
3613 {
3614 return loc != UNKNOWN_LOCATION;
3615 }
3616
3617 public:
3618 bool check_not_purview (location_t loc);
3619
3620 public:
3621 void mangle (bool include_partition);
3622
3623 public:
3624 void set_import (module_state const *, bool is_export);
3625 void announce (const char *) const;
3626
3627 public:
3628 /* Read and write module. */
3629 void write (elf_out *to, cpp_reader *);
3630 bool read_initial (cpp_reader *);
3631 bool read_preprocessor (bool);
3632 bool read_language (bool);
3633
3634 public:
3635 /* Read a section. */
3636 bool load_section (unsigned snum, binding_slot *mslot);
3637 /* Lazily read a section. */
3638 bool lazy_load (unsigned index, binding_slot *mslot);
3639
3640 public:
3641 /* Juggle a limited number of file numbers. */
3642 static void freeze_an_elf ();
3643 bool maybe_defrost ();
3644
3645 public:
3646 void maybe_completed_reading ();
3647 bool check_read (bool outermost, bool ok);
3648
3649 private:
3650 /* The README, for human consumption. */
3651 void write_readme (elf_out *to, cpp_reader *,
3652 const char *dialect, unsigned extensions);
3653 void write_env (elf_out *to);
3654
3655 private:
3656 /* Import tables. */
3657 void write_imports (bytes_out &cfg, bool direct);
3658 unsigned read_imports (bytes_in &cfg, cpp_reader *, line_maps *maps);
3659
3660 private:
3661 void write_imports (elf_out *to, unsigned *crc_ptr);
3662 bool read_imports (cpp_reader *, line_maps *);
3663
3664 private:
3665 void write_partitions (elf_out *to, unsigned, unsigned *crc_ptr);
3666 bool read_partitions (unsigned);
3667
3668 private:
3669 void write_config (elf_out *to, struct module_state_config &, unsigned crc);
3670 bool read_config (struct module_state_config &);
3671 static void write_counts (elf_out *to, unsigned [], unsigned *crc_ptr);
3672 bool read_counts (unsigned []);
3673
3674 public:
3675 void note_cmi_name ();
3676
3677 private:
3678 static unsigned write_bindings (elf_out *to, vec<depset *> depsets,
3679 unsigned *crc_ptr);
3680 bool read_bindings (unsigned count, unsigned lwm, unsigned hwm);
3681
3682 static void write_namespace (bytes_out &sec, depset *ns_dep);
3683 tree read_namespace (bytes_in &sec);
3684
3685 void write_namespaces (elf_out *to, vec<depset *> spaces,
3686 unsigned, unsigned *crc_ptr);
3687 bool read_namespaces (unsigned);
3688
3689 unsigned write_cluster (elf_out *to, depset *depsets[], unsigned size,
3690 depset::hash &, unsigned *counts, unsigned *crc_ptr);
3691 bool read_cluster (unsigned snum);
3692
3693 private:
3694 unsigned write_inits (elf_out *to, depset::hash &, unsigned *crc_ptr);
3695 bool read_inits (unsigned count);
3696
3697 private:
3698 void write_pendings (elf_out *to, vec<depset *> depsets,
3699 depset::hash &, unsigned count, unsigned *crc_ptr);
3700 bool read_pendings (unsigned count);
3701
3702 private:
3703 void write_entities (elf_out *to, vec<depset *> depsets,
3704 unsigned count, unsigned *crc_ptr);
3705 bool read_entities (unsigned count, unsigned lwm, unsigned hwm);
3706
3707 private:
3708 location_map_info write_prepare_maps (module_state_config *);
3709 bool read_prepare_maps (const module_state_config *);
3710
3711 void write_ordinary_maps (elf_out *to, location_map_info &,
3712 module_state_config *, bool, unsigned *crc_ptr);
3713 bool read_ordinary_maps ();
3714 void write_macro_maps (elf_out *to, location_map_info &,
3715 module_state_config *, unsigned *crc_ptr);
3716 bool read_macro_maps ();
3717
3718 private:
3719 void write_define (bytes_out &, const cpp_macro *, bool located = true);
3720 cpp_macro *read_define (bytes_in &, cpp_reader *, bool located = true) const;
3721 unsigned write_macros (elf_out *to, cpp_reader *, unsigned *crc_ptr);
3722 bool read_macros ();
3723 void install_macros ();
3724
3725 public:
3726 void import_macros ();
3727
3728 public:
3729 static void undef_macro (cpp_reader *, location_t, cpp_hashnode *);
3730 static cpp_macro *deferred_macro (cpp_reader *, location_t, cpp_hashnode *);
3731
3732 public:
3733 static void write_location (bytes_out &, location_t);
3734 location_t read_location (bytes_in &) const;
3735
3736 public:
3737 void set_flatname ();
3738 const char *get_flatname () const
3739 {
3740 return flatname;
3741 }
3742 location_t imported_from () const;
3743
3744 public:
3745 void set_filename (const Cody::Packet &);
3746 bool do_import (cpp_reader *, bool outermost);
3747 };
3748
3749 /* Hash module state by name. This cannot be a member of
3750 module_state, because of GTY restrictions. We never delete from
3751 the hash table, but ggc_ptr_hash doesn't support that
3752 simplification. */
3753
3754 struct module_state_hash : ggc_ptr_hash<module_state> {
3755 typedef std::pair<tree,uintptr_t> compare_type; /* {name,parent} */
3756
3757 static inline hashval_t hash (const value_type m);
3758 static inline hashval_t hash (const compare_type &n);
3759 static inline bool equal (const value_type existing,
3760 const compare_type &candidate);
3761 };
3762
3763 module_state::module_state (tree name, module_state *parent, bool partition)
3764 : imports (BITMAP_GGC_ALLOC ()), exports (BITMAP_GGC_ALLOC ()),
3765 parent (parent), name (name), slurp (NULL),
3766 flatname (NULL), filename (NULL),
3767 entity_lwm (~0u >> 1), entity_num (0),
3768 ordinary_locs (0, 0), macro_locs (0, 0),
3769 loc (UNKNOWN_LOCATION),
3770 crc (0), mod (MODULE_UNKNOWN), remap (0), subst (0)
3771 {
3772 loadedness = ML_NONE;
3773
3774 module_p = header_p = interface_p = partition_p = false;
3775
3776 directness = MD_NONE;
3777 exported_p = false;
3778
3779 cmi_noted_p = false;
3780 call_init_p = false;
3781
3782 partition_p = partition;
3783
3784 extensions = 0;
3785 if (name && TREE_CODE (name) == STRING_CST)
3786 {
3787 header_p = true;
3788
3789 const char *string = TREE_STRING_POINTER (name);
3790 gcc_checking_assert (string[0] == '.'
3791 ? IS_DIR_SEPARATOR (string[1])
3792 : IS_ABSOLUTE_PATH (string));
3793 }
3794
3795 gcc_checking_assert (!(parent && header_p));
3796 }
3797
3798 module_state::~module_state ()
3799 {
3800 release ();
3801 }
3802
3803 /* Hash module state. */
3804 static hashval_t
3805 module_name_hash (const_tree name)
3806 {
3807 if (TREE_CODE (name) == STRING_CST)
3808 return htab_hash_string (TREE_STRING_POINTER (name));
3809 else
3810 return IDENTIFIER_HASH_VALUE (name);
3811 }
3812
3813 hashval_t
3814 module_state_hash::hash (const value_type m)
3815 {
3816 hashval_t ph = pointer_hash<void>::hash
3817 (reinterpret_cast<void *> (reinterpret_cast<uintptr_t> (m->parent)
3818 | m->is_partition ()));
3819 hashval_t nh = module_name_hash (m->name);
3820 return iterative_hash_hashval_t (ph, nh);
3821 }
3822
3823 /* Hash a name. */
3824 hashval_t
3825 module_state_hash::hash (const compare_type &c)
3826 {
3827 hashval_t ph = pointer_hash<void>::hash (reinterpret_cast<void *> (c.second));
3828 hashval_t nh = module_name_hash (c.first);
3829
3830 return iterative_hash_hashval_t (ph, nh);
3831 }
3832
3833 bool
3834 module_state_hash::equal (const value_type existing,
3835 const compare_type &candidate)
3836 {
3837 uintptr_t ep = (reinterpret_cast<uintptr_t> (existing->parent)
3838 | existing->is_partition ());
3839 if (ep != candidate.second)
3840 return false;
3841
3842 /* Identifier comparison is by pointer. If the string_csts happen
3843 to be the same object, then they're equal too. */
3844 if (existing->name == candidate.first)
3845 return true;
3846
3847 /* If neither are string csts, they can't be equal. */
3848 if (TREE_CODE (candidate.first) != STRING_CST
3849 || TREE_CODE (existing->name) != STRING_CST)
3850 return false;
3851
3852 /* String equality. */
3853 if (TREE_STRING_LENGTH (existing->name)
3854 == TREE_STRING_LENGTH (candidate.first)
3855 && !memcmp (TREE_STRING_POINTER (existing->name),
3856 TREE_STRING_POINTER (candidate.first),
3857 TREE_STRING_LENGTH (existing->name)))
3858 return true;
3859
3860 return false;
3861 }
3862
3863 /********************************************************************/
3864 /* Global state */
3865
3866 /* Mapper name. */
3867 static const char *module_mapper_name;
3868
3869 /* CMI repository path and workspace. */
3870 static char *cmi_repo;
3871 static size_t cmi_repo_length;
3872 static char *cmi_path;
3873 static size_t cmi_path_alloc;
3874
3875 /* Count of available and loaded clusters. */
3876 static unsigned available_clusters;
3877 static unsigned loaded_clusters;
3878
3879 /* What the current TU is. */
3880 unsigned module_kind;
3881
3882 /* Number of global init calls needed. */
3883 unsigned num_init_calls_needed = 0;
3884
3885 /* Global trees. */
3886 static const std::pair<tree *, unsigned> global_tree_arys[] =
3887 {
3888 std::pair<tree *, unsigned> (sizetype_tab, stk_type_kind_last),
3889 std::pair<tree *, unsigned> (integer_types, itk_none),
3890 std::pair<tree *, unsigned> (global_trees, TI_MODULE_HWM),
3891 std::pair<tree *, unsigned> (c_global_trees, CTI_MODULE_HWM),
3892 std::pair<tree *, unsigned> (cp_global_trees, CPTI_MODULE_HWM),
3893 std::pair<tree *, unsigned> (NULL, 0)
3894 };
3895 static GTY(()) vec<tree, va_gc> *fixed_trees;
3896 static unsigned global_crc;
3897
3898 /* Lazy loading can open many files concurrently, there are
3899 per-process limits on that. We pay attention to the process limit,
3900 and attempt to increase it when we run out. Otherwise we use an
3901 LRU scheme to figure out who to flush. Note that if the import
3902 graph /depth/ exceeds lazy_limit, we'll exceed the limit. */
3903 static unsigned lazy_lru; /* LRU counter. */
3904 static unsigned lazy_open; /* Number of open modules */
3905 static unsigned lazy_limit; /* Current limit of open modules. */
3906 static unsigned lazy_hard_limit; /* Hard limit on open modules. */
3907 /* Account for source, assembler and dump files & directory searches.
3908 We don't keep the source file's open, so we don't have to account
3909 for #include depth. I think dump files are opened and closed per
3910 pass, but ICBW. */
3911 #define LAZY_HEADROOM 15 /* File descriptor headroom. */
3912
3913 /* Vector of module state. Indexed by OWNER. Has at least 2 slots. */
3914 static GTY(()) vec<module_state *, va_gc> *modules;
3915
3916 /* Hash of module state, findable by {name, parent}. */
3917 static GTY(()) hash_table<module_state_hash> *modules_hash;
3918
3919 /* Map of imported entities. We map DECL_UID to index of entity
3920 vector. */
3921 typedef hash_map<unsigned/*UID*/, unsigned/*index*/,
3922 simple_hashmap_traits<int_hash<unsigned,0>, unsigned>
3923 > entity_map_t;
3924 static entity_map_t *entity_map;
3925 /* Doesn't need GTYing, because any tree referenced here is also
3926 findable by, symbol table, specialization table, return type of
3927 reachable function. */
3928 static vec<binding_slot, va_heap, vl_embed> *entity_ary;
3929
3930 /* Members entities of imported classes that are defined in this TU.
3931 These are where the entity's context is not from the current TU.
3932 We need to emit the definition (but not the enclosing class).
3933
3934 We could find these by walking ALL the imported classes that we
3935 could provide a member definition. But that's expensive,
3936 especially when you consider lazy implicit member declarations,
3937 which could be ANY imported class. */
3938 static GTY(()) vec<tree, va_gc> *class_members;
3939
3940 /* The same problem exists for class template partial
3941 specializations. Now that we have constraints, the invariant of
3942 expecting them in the instantiation table no longer holds. One of
3943 the constrained partial specializations will be there, but the
3944 others not so much. It's not even an unconstrained partial
3945 spacialization in the table :( so any partial template declaration
3946 is added to this list too. */
3947 static GTY(()) vec<tree, va_gc> *partial_specializations;
3948
3949 /********************************************************************/
3950
3951 /* Our module mapper (created lazily). */
3952 module_client *mapper;
3953
3954 static module_client *make_mapper (location_t loc);
3955 inline module_client *get_mapper (location_t loc)
3956 {
3957 auto *res = mapper;
3958 if (!res)
3959 res = make_mapper (loc);
3960 return res;
3961 }
3962
3963 /********************************************************************/
3964 static tree
3965 get_clone_target (tree decl)
3966 {
3967 tree target;
3968
3969 if (TREE_CODE (decl) == TEMPLATE_DECL)
3970 {
3971 tree res_orig = DECL_CLONED_FUNCTION (DECL_TEMPLATE_RESULT (decl));
3972
3973 target = DECL_TI_TEMPLATE (res_orig);
3974 }
3975 else
3976 target = DECL_CLONED_FUNCTION (decl);
3977
3978 gcc_checking_assert (DECL_MAYBE_IN_CHARGE_CDTOR_P (target));
3979
3980 return target;
3981 }
3982
3983 /* Like FOR_EACH_CLONE, but will walk cloned templates. */
3984 #define FOR_EVERY_CLONE(CLONE, FN) \
3985 if (!DECL_MAYBE_IN_CHARGE_CDTOR_P (FN)); \
3986 else \
3987 for (CLONE = DECL_CHAIN (FN); \
3988 CLONE && DECL_CLONED_FUNCTION_P (CLONE); \
3989 CLONE = DECL_CHAIN (CLONE))
3990
3991 /* It'd be nice if USE_TEMPLATE was a field of template_info
3992 (a) it'd solve the enum case dealt with below,
3993 (b) both class templates and decl templates would store this in the
3994 same place
3995 (c) this function wouldn't need the by-ref arg, which is annoying. */
3996
3997 static tree
3998 node_template_info (tree decl, int &use)
3999 {
4000 tree ti = NULL_TREE;
4001 int use_tpl = -1;
4002 if (DECL_IMPLICIT_TYPEDEF_P (decl))
4003 {
4004 tree type = TREE_TYPE (decl);
4005
4006 ti = TYPE_TEMPLATE_INFO (type);
4007 if (ti)
4008 {
4009 if (TYPE_LANG_SPECIFIC (type))
4010 use_tpl = CLASSTYPE_USE_TEMPLATE (type);
4011 else
4012 {
4013 /* An enum, where we don't explicitly encode use_tpl.
4014 If the containing context (a type or a function), is
4015 an ({im,ex}plicit) instantiation, then this is too.
4016 If it's a partial or explicit specialization, then
4017 this is not!. */
4018 tree ctx = CP_DECL_CONTEXT (decl);
4019 if (TYPE_P (ctx))
4020 ctx = TYPE_NAME (ctx);
4021 node_template_info (ctx, use);
4022 use_tpl = use != 2 ? use : 0;
4023 }
4024 }
4025 }
4026 else if (DECL_LANG_SPECIFIC (decl)
4027 && (TREE_CODE (decl) == VAR_DECL
4028 || TREE_CODE (decl) == TYPE_DECL
4029 || TREE_CODE (decl) == FUNCTION_DECL
4030 || TREE_CODE (decl) == FIELD_DECL
4031 || TREE_CODE (decl) == TEMPLATE_DECL))
4032 {
4033 use_tpl = DECL_USE_TEMPLATE (decl);
4034 ti = DECL_TEMPLATE_INFO (decl);
4035 }
4036
4037 use = use_tpl;
4038 return ti;
4039 }
4040
4041 /* Find the index in entity_ary for an imported DECL. It should
4042 always be there, but bugs can cause it to be missing, and that can
4043 crash the crash reporting -- let's not do that! When streaming
4044 out we place entities from this module there too -- with negated
4045 indices. */
4046
4047 static unsigned
4048 import_entity_index (tree decl, bool null_ok = false)
4049 {
4050 if (unsigned *slot = entity_map->get (DECL_UID (decl)))
4051 return *slot;
4052
4053 gcc_checking_assert (null_ok);
4054 return ~(~0u >> 1);
4055 }
4056
4057 /* Find the module for an imported entity at INDEX in the entity ary.
4058 There must be one. */
4059
4060 static module_state *
4061 import_entity_module (unsigned index)
4062 {
4063 if (index > ~(~0u >> 1))
4064 /* This is an index for an exported entity. */
4065 return (*modules)[0];
4066
4067 unsigned pos = 1;
4068 unsigned len = modules->length () - pos;
4069 while (len)
4070 {
4071 unsigned half = len / 2;
4072 module_state *probe = (*modules)[pos + half];
4073 if (index < probe->entity_lwm)
4074 len = half;
4075 else if (index < probe->entity_lwm + probe->entity_num)
4076 return probe;
4077 else
4078 {
4079 pos += half + 1;
4080 len = len - (half + 1);
4081 }
4082 }
4083 gcc_unreachable ();
4084 }
4085
4086
4087 /********************************************************************/
4088 /* A dumping machinery. */
4089
4090 class dumper {
4091 public:
4092 enum {
4093 LOCATION = TDF_LINENO, /* -lineno:Source location streaming. */
4094 DEPEND = TDF_GRAPH, /* -graph:Dependency graph construction. */
4095 CLUSTER = TDF_BLOCKS, /* -blocks:Clusters. */
4096 TREE = TDF_UID, /* -uid:Tree streaming. */
4097 MERGE = TDF_ALIAS, /* -alias:Mergeable Entities. */
4098 ELF = TDF_ASMNAME, /* -asmname:Elf data. */
4099 MACRO = TDF_VOPS /* -vops:Macros. */
4100 };
4101
4102 private:
4103 struct impl {
4104 typedef vec<module_state *, va_heap, vl_embed> stack_t;
4105
4106 FILE *stream; /* Dump stream. */
4107 unsigned indent; /* Local indentation. */
4108 bool bol; /* Beginning of line. */
4109 stack_t stack; /* Trailing array of module_state. */
4110
4111 bool nested_name (tree); /* Dump a name following DECL_CONTEXT. */
4112 };
4113
4114 public:
4115 /* The dumper. */
4116 impl *dumps;
4117 dump_flags_t flags;
4118
4119 public:
4120 /* Push/pop module state dumping. */
4121 unsigned push (module_state *);
4122 void pop (unsigned);
4123
4124 public:
4125 /* Change local indentation. */
4126 void indent ()
4127 {
4128 if (dumps)
4129 dumps->indent++;
4130 }
4131 void outdent ()
4132 {
4133 if (dumps)
4134 {
4135 gcc_checking_assert (dumps->indent);
4136 dumps->indent--;
4137 }
4138 }
4139
4140 public:
4141 /* Is dump enabled?. */
4142 bool operator () (int mask = 0)
4143 {
4144 if (!dumps || !dumps->stream)
4145 return false;
4146 if (mask && !(mask & flags))
4147 return false;
4148 return true;
4149 }
4150 /* Dump some information. */
4151 bool operator () (const char *, ...);
4152 };
4153
4154 /* The dumper. */
4155 static dumper dump = {0, dump_flags_t (0)};
4156
4157 /* Push to dumping M. Return previous indentation level. */
4158
4159 unsigned
4160 dumper::push (module_state *m)
4161 {
4162 FILE *stream = NULL;
4163 if (!dumps || !dumps->stack.length ())
4164 {
4165 stream = dump_begin (module_dump_id, &flags);
4166 if (!stream)
4167 return 0;
4168 }
4169
4170 if (!dumps || !dumps->stack.space (1))
4171 {
4172 /* Create or extend the dump implementor. */
4173 unsigned current = dumps ? dumps->stack.length () : 0;
4174 unsigned count = current ? current * 2 : EXPERIMENT (1, 20);
4175 size_t alloc = (offsetof (impl, stack)
4176 + impl::stack_t::embedded_size (count));
4177 dumps = XRESIZEVAR (impl, dumps, alloc);
4178 dumps->stack.embedded_init (count, current);
4179 }
4180 if (stream)
4181 dumps->stream = stream;
4182
4183 unsigned n = dumps->indent;
4184 dumps->indent = 0;
4185 dumps->bol = true;
4186 dumps->stack.quick_push (m);
4187 if (m)
4188 {
4189 module_state *from = NULL;
4190
4191 if (dumps->stack.length () > 1)
4192 from = dumps->stack[dumps->stack.length () - 2];
4193 else
4194 dump ("");
4195 dump (from ? "Starting module %M (from %M)"
4196 : "Starting module %M", m, from);
4197 }
4198
4199 return n;
4200 }
4201
4202 /* Pop from dumping. Restore indentation to N. */
4203
4204 void dumper::pop (unsigned n)
4205 {
4206 if (!dumps)
4207 return;
4208
4209 gcc_checking_assert (dump () && !dumps->indent);
4210 if (module_state *m = dumps->stack[dumps->stack.length () - 1])
4211 {
4212 module_state *from = (dumps->stack.length () > 1
4213 ? dumps->stack[dumps->stack.length () - 2] : NULL);
4214 dump (from ? "Finishing module %M (returning to %M)"
4215 : "Finishing module %M", m, from);
4216 }
4217 dumps->stack.pop ();
4218 dumps->indent = n;
4219 if (!dumps->stack.length ())
4220 {
4221 dump_end (module_dump_id, dumps->stream);
4222 dumps->stream = NULL;
4223 }
4224 }
4225
4226 /* Dump a nested name for arbitrary tree T. Sometimes it won't have a
4227 name. */
4228
4229 bool
4230 dumper::impl::nested_name (tree t)
4231 {
4232 tree ti = NULL_TREE;
4233 int origin = -1;
4234 tree name = NULL_TREE;
4235
4236 if (t && TREE_CODE (t) == TREE_BINFO)
4237 t = BINFO_TYPE (t);
4238
4239 if (t && TYPE_P (t))
4240 t = TYPE_NAME (t);
4241
4242 if (t && DECL_P (t))
4243 {
4244 if (t == global_namespace || DECL_TEMPLATE_PARM_P (t))
4245 ;
4246 else if (tree ctx = DECL_CONTEXT (t))
4247 if (TREE_CODE (ctx) == TRANSLATION_UNIT_DECL
4248 || nested_name (ctx))
4249 fputs ("::", stream);
4250
4251 int use_tpl;
4252 ti = node_template_info (t, use_tpl);
4253 if (ti && TREE_CODE (TI_TEMPLATE (ti)) == TEMPLATE_DECL
4254 && (DECL_TEMPLATE_RESULT (TI_TEMPLATE (ti)) == t))
4255 t = TI_TEMPLATE (ti);
4256 if (TREE_CODE (t) == TEMPLATE_DECL)
4257 fputs ("template ", stream);
4258
4259 if (DECL_LANG_SPECIFIC (t) && DECL_MODULE_IMPORT_P (t))
4260 {
4261 /* We need to be careful here, so as to not explode on
4262 inconsistent data -- we're probably debugging, because
4263 Something Is Wrong. */
4264 unsigned index = import_entity_index (t, true);
4265 if (!(index & ~(~0u >> 1)))
4266 origin = import_entity_module (index)->mod;
4267 else if (index > ~(~0u >> 1))
4268 /* An imported partition member that we're emitting. */
4269 origin = 0;
4270 else
4271 origin = -2;
4272 }
4273
4274 name = DECL_NAME (t) ? DECL_NAME (t)
4275 : HAS_DECL_ASSEMBLER_NAME_P (t) ? DECL_ASSEMBLER_NAME_RAW (t)
4276 : NULL_TREE;
4277 }
4278 else
4279 name = t;
4280
4281 if (name)
4282 switch (TREE_CODE (name))
4283 {
4284 default:
4285 fputs ("#unnamed#", stream);
4286 break;
4287
4288 case IDENTIFIER_NODE:
4289 fwrite (IDENTIFIER_POINTER (name), 1, IDENTIFIER_LENGTH (name), stream);
4290 break;
4291
4292 case INTEGER_CST:
4293 print_hex (wi::to_wide (name), stream);
4294 break;
4295
4296 case STRING_CST:
4297 /* If TREE_TYPE is NULL, this is a raw string. */
4298 fwrite (TREE_STRING_POINTER (name), 1,
4299 TREE_STRING_LENGTH (name) - (TREE_TYPE (name) != NULL_TREE),
4300 stream);
4301 break;
4302 }
4303 else
4304 fputs ("#null#", stream);
4305
4306 if (origin >= 0)
4307 {
4308 const module_state *module = (*modules)[origin];
4309 fprintf (stream, "@%s:%d", !module ? "" : !module->name ? "(unnamed)"
4310 : module->get_flatname (), origin);
4311 }
4312 else if (origin == -2)
4313 fprintf (stream, "@???");
4314
4315 if (ti)
4316 {
4317 tree args = INNERMOST_TEMPLATE_ARGS (TI_ARGS (ti));
4318 fputs ("<", stream);
4319 if (args)
4320 for (int ix = 0; ix != TREE_VEC_LENGTH (args); ix++)
4321 {
4322 if (ix)
4323 fputs (",", stream);
4324 nested_name (TREE_VEC_ELT (args, ix));
4325 }
4326 fputs (">", stream);
4327 }
4328
4329 return true;
4330 }
4331
4332 /* Formatted dumping. FORMAT begins with '+' do not emit a trailing
4333 new line. (Normally it is appended.)
4334 Escapes:
4335 %C - tree_code
4336 %I - identifier
4337 %M - module_state
4338 %N - name -- DECL_NAME
4339 %P - context:name pair
4340 %R - unsigned:unsigned ratio
4341 %S - symbol -- DECL_ASSEMBLER_NAME
4342 %U - long unsigned
4343 %V - version
4344 --- the following are printf-like, but without its flexibility
4345 %d - decimal int
4346 %p - pointer
4347 %s - string
4348 %u - unsigned int
4349 %x - hex int
4350
4351 We do not implement the printf modifiers. */
4352
4353 bool
4354 dumper::operator () (const char *format, ...)
4355 {
4356 if (!(*this) ())
4357 return false;
4358
4359 bool no_nl = format[0] == '+';
4360 format += no_nl;
4361
4362 if (dumps->bol)
4363 {
4364 /* Module import indent. */
4365 if (unsigned depth = dumps->stack.length () - 1)
4366 {
4367 const char *prefix = ">>>>";
4368 fprintf (dumps->stream, (depth <= strlen (prefix)
4369 ? &prefix[strlen (prefix) - depth]
4370 : ">.%d.>"), depth);
4371 }
4372
4373 /* Local indent. */
4374 if (unsigned indent = dumps->indent)
4375 {
4376 const char *prefix = " ";
4377 fprintf (dumps->stream, (indent <= strlen (prefix)
4378 ? &prefix[strlen (prefix) - indent]
4379 : " .%d. "), indent);
4380 }
4381 dumps->bol = false;
4382 }
4383
4384 va_list args;
4385 va_start (args, format);
4386 while (const char *esc = strchr (format, '%'))
4387 {
4388 fwrite (format, 1, (size_t)(esc - format), dumps->stream);
4389 format = ++esc;
4390 switch (*format++)
4391 {
4392 default:
4393 gcc_unreachable ();
4394
4395 case '%':
4396 fputc ('%', dumps->stream);
4397 break;
4398
4399 case 'C': /* Code */
4400 {
4401 tree_code code = (tree_code)va_arg (args, unsigned);
4402 fputs (get_tree_code_name (code), dumps->stream);
4403 }
4404 break;
4405
4406 case 'I': /* Identifier. */
4407 {
4408 tree t = va_arg (args, tree);
4409 dumps->nested_name (t);
4410 }
4411 break;
4412
4413 case 'M': /* Module. */
4414 {
4415 const char *str = "(none)";
4416 if (module_state *m = va_arg (args, module_state *))
4417 {
4418 if (!m->is_rooted ())
4419 str = "(detached)";
4420 else
4421 str = m->get_flatname ();
4422 }
4423 fputs (str, dumps->stream);
4424 }
4425 break;
4426
4427 case 'N': /* Name. */
4428 {
4429 tree t = va_arg (args, tree);
4430 if (t && TREE_CODE (t) == OVERLOAD)
4431 t = OVL_FIRST (t);
4432 fputc ('\'', dumps->stream);
4433 dumps->nested_name (t);
4434 fputc ('\'', dumps->stream);
4435 }
4436 break;
4437
4438 case 'P': /* Pair. */
4439 {
4440 tree ctx = va_arg (args, tree);
4441 tree name = va_arg (args, tree);
4442 fputc ('\'', dumps->stream);
4443 dumps->nested_name (ctx);
4444 if (ctx && ctx != global_namespace)
4445 fputs ("::", dumps->stream);
4446 dumps->nested_name (name);
4447 fputc ('\'', dumps->stream);
4448 }
4449 break;
4450
4451 case 'R': /* Ratio */
4452 {
4453 unsigned a = va_arg (args, unsigned);
4454 unsigned b = va_arg (args, unsigned);
4455 fprintf (dumps->stream, "%.1f", (float) a / (b + !b));
4456 }
4457 break;
4458
4459 case 'S': /* Symbol name */
4460 {
4461 tree t = va_arg (args, tree);
4462 if (t && TYPE_P (t))
4463 t = TYPE_NAME (t);
4464 if (t && HAS_DECL_ASSEMBLER_NAME_P (t)
4465 && DECL_ASSEMBLER_NAME_SET_P (t))
4466 {
4467 fputc ('(', dumps->stream);
4468 fputs (IDENTIFIER_POINTER (DECL_ASSEMBLER_NAME (t)),
4469 dumps->stream);
4470 fputc (')', dumps->stream);
4471 }
4472 }
4473 break;
4474
4475 case 'U': /* long unsigned. */
4476 {
4477 unsigned long u = va_arg (args, unsigned long);
4478 fprintf (dumps->stream, "%lu", u);
4479 }
4480 break;
4481
4482 case 'V': /* Verson. */
4483 {
4484 unsigned v = va_arg (args, unsigned);
4485 verstr_t string;
4486
4487 version2string (v, string);
4488 fputs (string, dumps->stream);
4489 }
4490 break;
4491
4492 case 'c': /* Character. */
4493 {
4494 int c = va_arg (args, int);
4495 fputc (c, dumps->stream);
4496 }
4497 break;
4498
4499 case 'd': /* Decimal Int. */
4500 {
4501 int d = va_arg (args, int);
4502 fprintf (dumps->stream, "%d", d);
4503 }
4504 break;
4505
4506 case 'p': /* Pointer. */
4507 {
4508 void *p = va_arg (args, void *);
4509 fprintf (dumps->stream, "%p", p);
4510 }
4511 break;
4512
4513 case 's': /* String. */
4514 {
4515 const char *s = va_arg (args, char *);
4516 gcc_checking_assert (s);
4517 fputs (s, dumps->stream);
4518 }
4519 break;
4520
4521 case 'u': /* Unsigned. */
4522 {
4523 unsigned u = va_arg (args, unsigned);
4524 fprintf (dumps->stream, "%u", u);
4525 }
4526 break;
4527
4528 case 'x': /* Hex. */
4529 {
4530 unsigned x = va_arg (args, unsigned);
4531 fprintf (dumps->stream, "%x", x);
4532 }
4533 break;
4534 }
4535 }
4536 fputs (format, dumps->stream);
4537 va_end (args);
4538 if (!no_nl)
4539 {
4540 dumps->bol = true;
4541 fputc ('\n', dumps->stream);
4542 }
4543 return true;
4544 }
4545
4546 struct note_def_cache_hasher : ggc_cache_ptr_hash<tree_node>
4547 {
4548 static int keep_cache_entry (tree t)
4549 {
4550 if (!CHECKING_P)
4551 /* GTY is unfortunately not clever enough to conditionalize
4552 this. */
4553 gcc_unreachable ();
4554
4555 if (ggc_marked_p (t))
4556 return -1;
4557
4558 unsigned n = dump.push (NULL);
4559 /* This might or might not be an error. We should note its
4560 dropping whichever. */
4561 dump () && dump ("Dropping %N from note_defs table", t);
4562 dump.pop (n);
4563
4564 return 0;
4565 }
4566 };
4567
4568 /* We should stream each definition at most once.
4569 This needs to be a cache because there are cases where a definition
4570 ends up being not retained, and we need to drop those so we don't
4571 get confused if memory is reallocated. */
4572 typedef hash_table<note_def_cache_hasher> note_defs_table_t;
4573 static GTY((cache)) note_defs_table_t *note_defs;
4574
4575 void
4576 trees_in::assert_definition (tree decl ATTRIBUTE_UNUSED,
4577 bool installing ATTRIBUTE_UNUSED)
4578 {
4579 #if CHECKING_P
4580 tree *slot = note_defs->find_slot (decl, installing ? INSERT : NO_INSERT);
4581 if (installing)
4582 {
4583 /* We must be inserting for the first time. */
4584 gcc_assert (!*slot);
4585 *slot = decl;
4586 }
4587 else
4588 /* If this is not the mergeable entity, it should not be in the
4589 table. If it is a non-global-module mergeable entity, it
4590 should be in the table. Global module entities could have been
4591 defined textually in the current TU and so might or might not
4592 be present. */
4593 gcc_assert (!is_duplicate (decl)
4594 ? !slot
4595 : (slot
4596 || !DECL_LANG_SPECIFIC (decl)
4597 || !DECL_MODULE_PURVIEW_P (decl)
4598 || (!DECL_MODULE_IMPORT_P (decl)
4599 && header_module_p ())));
4600
4601 if (TREE_CODE (decl) == TEMPLATE_DECL)
4602 gcc_assert (!note_defs->find_slot (DECL_TEMPLATE_RESULT (decl), NO_INSERT));
4603 #endif
4604 }
4605
4606 void
4607 trees_out::assert_definition (tree decl ATTRIBUTE_UNUSED)
4608 {
4609 #if CHECKING_P
4610 tree *slot = note_defs->find_slot (decl, INSERT);
4611 gcc_assert (!*slot);
4612 *slot = decl;
4613 if (TREE_CODE (decl) == TEMPLATE_DECL)
4614 gcc_assert (!note_defs->find_slot (DECL_TEMPLATE_RESULT (decl), NO_INSERT));
4615 #endif
4616 }
4617
4618 /********************************************************************/
4619 static bool
4620 noisy_p ()
4621 {
4622 if (quiet_flag)
4623 return false;
4624
4625 pp_needs_newline (global_dc->printer) = true;
4626 diagnostic_set_last_function (global_dc, (diagnostic_info *) NULL);
4627
4628 return true;
4629 }
4630
4631 /* Set the cmi repo. Strip trailing '/', '.' becomes NULL. */
4632
4633 static void
4634 set_cmi_repo (const char *r)
4635 {
4636 XDELETEVEC (cmi_repo);
4637 XDELETEVEC (cmi_path);
4638 cmi_path_alloc = 0;
4639
4640 cmi_repo = NULL;
4641 cmi_repo_length = 0;
4642
4643 if (!r || !r[0])
4644 return;
4645
4646 size_t len = strlen (r);
4647 cmi_repo = XNEWVEC (char, len + 1);
4648 memcpy (cmi_repo, r, len + 1);
4649
4650 if (len > 1 && IS_DIR_SEPARATOR (cmi_repo[len-1]))
4651 len--;
4652 if (len == 1 && cmi_repo[0] == '.')
4653 len--;
4654 cmi_repo[len] = 0;
4655 cmi_repo_length = len;
4656 }
4657
4658 /* TO is a repo-relative name. Provide one that we may use from where
4659 we are. */
4660
4661 static const char *
4662 maybe_add_cmi_prefix (const char *to, size_t *len_p = NULL)
4663 {
4664 size_t len = len_p || cmi_repo_length ? strlen (to) : 0;
4665
4666 if (cmi_repo_length && !IS_ABSOLUTE_PATH (to))
4667 {
4668 if (cmi_path_alloc < cmi_repo_length + len + 2)
4669 {
4670 XDELETEVEC (cmi_path);
4671 cmi_path_alloc = cmi_repo_length + len * 2 + 2;
4672 cmi_path = XNEWVEC (char, cmi_path_alloc);
4673
4674 memcpy (cmi_path, cmi_repo, cmi_repo_length);
4675 cmi_path[cmi_repo_length] = DIR_SEPARATOR;
4676 }
4677
4678 memcpy (&cmi_path[cmi_repo_length + 1], to, len + 1);
4679 len += cmi_repo_length + 1;
4680 to = cmi_path;
4681 }
4682
4683 if (len_p)
4684 *len_p = len;
4685
4686 return to;
4687 }
4688
4689 /* Try and create the directories of PATH. */
4690
4691 static void
4692 create_dirs (char *path)
4693 {
4694 /* Try and create the missing directories. */
4695 for (char *base = path; *base; base++)
4696 if (IS_DIR_SEPARATOR (*base))
4697 {
4698 char sep = *base;
4699 *base = 0;
4700 int failed = mkdir (path, S_IRWXU | S_IRWXG | S_IRWXO);
4701 dump () && dump ("Mkdir ('%s') errno:=%u", path, failed ? errno : 0);
4702 *base = sep;
4703 if (failed
4704 /* Maybe racing with another creator (of a *different*
4705 module). */
4706 && errno != EEXIST)
4707 break;
4708 }
4709 }
4710
4711 /* Given a CLASSTYPE_DECL_LIST VALUE get the the template friend decl,
4712 if that's what this is. */
4713
4714 static tree
4715 friend_from_decl_list (tree frnd)
4716 {
4717 tree res = frnd;
4718
4719 if (TREE_CODE (frnd) != TEMPLATE_DECL)
4720 {
4721 tree tmpl = NULL_TREE;
4722 if (TYPE_P (frnd))
4723 {
4724 res = TYPE_NAME (frnd);
4725 if (CLASSTYPE_TEMPLATE_INFO (frnd))
4726 tmpl = CLASSTYPE_TI_TEMPLATE (frnd);
4727 }
4728 else if (DECL_TEMPLATE_INFO (frnd))
4729 {
4730 tmpl = DECL_TI_TEMPLATE (frnd);
4731 if (TREE_CODE (tmpl) != TEMPLATE_DECL)
4732 tmpl = NULL_TREE;
4733 }
4734
4735 if (tmpl && DECL_TEMPLATE_RESULT (tmpl) == res)
4736 res = tmpl;
4737 }
4738
4739 return res;
4740 }
4741
4742 static tree
4743 find_enum_member (tree ctx, tree name)
4744 {
4745 for (tree values = TYPE_VALUES (ctx);
4746 values; values = TREE_CHAIN (values))
4747 if (DECL_NAME (TREE_VALUE (values)) == name)
4748 return TREE_VALUE (values);
4749
4750 return NULL_TREE;
4751 }
4752
4753 /********************************************************************/
4754 /* Instrumentation gathered writing bytes. */
4755
4756 void
4757 bytes_out::instrument ()
4758 {
4759 dump ("Wrote %u bytes in %u blocks", lengths[3], spans[3]);
4760 dump ("Wrote %u bits in %u bytes", lengths[0] + lengths[1], lengths[2]);
4761 for (unsigned ix = 0; ix < 2; ix++)
4762 dump (" %u %s spans of %R bits", spans[ix],
4763 ix ? "one" : "zero", lengths[ix], spans[ix]);
4764 dump (" %u blocks with %R bits padding", spans[2],
4765 lengths[2] * 8 - (lengths[0] + lengths[1]), spans[2]);
4766 }
4767
4768 /* Instrumentation gathered writing trees. */
4769 void
4770 trees_out::instrument ()
4771 {
4772 if (dump (""))
4773 {
4774 bytes_out::instrument ();
4775 dump ("Wrote:");
4776 dump (" %u decl trees", decl_val_count);
4777 dump (" %u other trees", tree_val_count);
4778 dump (" %u back references", back_ref_count);
4779 dump (" %u null trees", null_count);
4780 }
4781 }
4782
4783 /* Setup and teardown for a tree walk. */
4784
4785 void
4786 trees_out::begin ()
4787 {
4788 gcc_assert (!streaming_p () || !tree_map.elements ());
4789
4790 mark_trees ();
4791 if (streaming_p ())
4792 parent::begin ();
4793 }
4794
4795 unsigned
4796 trees_out::end (elf_out *sink, unsigned name, unsigned *crc_ptr)
4797 {
4798 gcc_checking_assert (streaming_p ());
4799
4800 unmark_trees ();
4801 return parent::end (sink, name, crc_ptr);
4802 }
4803
4804 void
4805 trees_out::end ()
4806 {
4807 gcc_assert (!streaming_p ());
4808
4809 unmark_trees ();
4810 /* Do not parent::end -- we weren't streaming. */
4811 }
4812
4813 void
4814 trees_out::mark_trees ()
4815 {
4816 if (size_t size = tree_map.elements ())
4817 {
4818 /* This isn't our first rodeo, destroy and recreate the
4819 tree_map. I'm a bad bad man. Use the previous size as a
4820 guess for the next one (so not all bad). */
4821 tree_map.~ptr_int_hash_map ();
4822 new (&tree_map) ptr_int_hash_map (size);
4823 }
4824
4825 /* Install the fixed trees, with +ve references. */
4826 unsigned limit = fixed_trees->length ();
4827 for (unsigned ix = 0; ix != limit; ix++)
4828 {
4829 tree val = (*fixed_trees)[ix];
4830 bool existed = tree_map.put (val, ix + tag_fixed);
4831 gcc_checking_assert (!TREE_VISITED (val) && !existed);
4832 TREE_VISITED (val) = true;
4833 }
4834
4835 ref_num = 0;
4836 }
4837
4838 /* Unmark the trees we encountered */
4839
4840 void
4841 trees_out::unmark_trees ()
4842 {
4843 ptr_int_hash_map::iterator end (tree_map.end ());
4844 for (ptr_int_hash_map::iterator iter (tree_map.begin ()); iter != end; ++iter)
4845 {
4846 tree node = reinterpret_cast<tree> ((*iter).first);
4847 int ref = (*iter).second;
4848 /* We should have visited the node, and converted its mergeable
4849 reference to a regular reference. */
4850 gcc_checking_assert (TREE_VISITED (node)
4851 && (ref <= tag_backref || ref >= tag_fixed));
4852 TREE_VISITED (node) = false;
4853 }
4854 }
4855
4856 /* Mark DECL for by-value walking. We do this by inserting it into
4857 the tree map with a reference of zero. May be called multiple
4858 times on the same node. */
4859
4860 void
4861 trees_out::mark_by_value (tree decl)
4862 {
4863 gcc_checking_assert (DECL_P (decl)
4864 /* Enum consts are INTEGER_CSTS. */
4865 || TREE_CODE (decl) == INTEGER_CST
4866 || TREE_CODE (decl) == TREE_BINFO);
4867
4868 if (TREE_VISITED (decl))
4869 /* Must already be forced or fixed. */
4870 gcc_checking_assert (*tree_map.get (decl) >= tag_value);
4871 else
4872 {
4873 bool existed = tree_map.put (decl, tag_value);
4874 gcc_checking_assert (!existed);
4875 TREE_VISITED (decl) = true;
4876 }
4877 }
4878
4879 int
4880 trees_out::get_tag (tree t)
4881 {
4882 gcc_checking_assert (TREE_VISITED (t));
4883 return *tree_map.get (t);
4884 }
4885
4886 /* Insert T into the map, return its tag number. */
4887
4888 int
4889 trees_out::insert (tree t, walk_kind walk)
4890 {
4891 gcc_checking_assert (walk != WK_normal || !TREE_VISITED (t));
4892 int tag = --ref_num;
4893 bool existed;
4894 int &slot = tree_map.get_or_insert (t, &existed);
4895 gcc_checking_assert (TREE_VISITED (t) == existed
4896 && (!existed
4897 || (walk == WK_value && slot == tag_value)));
4898 TREE_VISITED (t) = true;
4899 slot = tag;
4900
4901 return tag;
4902 }
4903
4904 /* Insert T into the backreference array. Return its back reference
4905 number. */
4906
4907 int
4908 trees_in::insert (tree t)
4909 {
4910 gcc_checking_assert (t || get_overrun ());
4911 back_refs.safe_push (t);
4912 return -(int)back_refs.length ();
4913 }
4914
4915 /* A chained set of decls. */
4916
4917 void
4918 trees_out::chained_decls (tree decls)
4919 {
4920 for (; decls; decls = DECL_CHAIN (decls))
4921 {
4922 if (VAR_OR_FUNCTION_DECL_P (decls)
4923 && DECL_LOCAL_DECL_P (decls))
4924 {
4925 /* Make sure this is the first encounter, and mark for
4926 walk-by-value. */
4927 gcc_checking_assert (!TREE_VISITED (decls)
4928 && !DECL_TEMPLATE_INFO (decls));
4929 mark_by_value (decls);
4930 }
4931 tree_node (decls);
4932 }
4933 tree_node (NULL_TREE);
4934 }
4935
4936 tree
4937 trees_in::chained_decls ()
4938 {
4939 tree decls = NULL_TREE;
4940 for (tree *chain = &decls;;)
4941 if (tree decl = tree_node ())
4942 {
4943 if (!DECL_P (decl) || DECL_CHAIN (decl))
4944 {
4945 set_overrun ();
4946 break;
4947 }
4948 *chain = decl;
4949 chain = &DECL_CHAIN (decl);
4950 }
4951 else
4952 break;
4953
4954 return decls;
4955 }
4956
4957 /* A vector of decls following DECL_CHAIN. */
4958
4959 void
4960 trees_out::vec_chained_decls (tree decls)
4961 {
4962 if (streaming_p ())
4963 {
4964 unsigned len = 0;
4965
4966 for (tree decl = decls; decl; decl = DECL_CHAIN (decl))
4967 len++;
4968 u (len);
4969 }
4970
4971 for (tree decl = decls; decl; decl = DECL_CHAIN (decl))
4972 {
4973 if (DECL_IMPLICIT_TYPEDEF_P (decl)
4974 && TYPE_NAME (TREE_TYPE (decl)) != decl)
4975 /* An anonynmous struct with a typedef name. An odd thing to
4976 write. */
4977 tree_node (NULL_TREE);
4978 else
4979 tree_node (decl);
4980 }
4981 }
4982
4983 vec<tree, va_heap> *
4984 trees_in::vec_chained_decls ()
4985 {
4986 vec<tree, va_heap> *v = NULL;
4987
4988 if (unsigned len = u ())
4989 {
4990 vec_alloc (v, len);
4991
4992 for (unsigned ix = 0; ix < len; ix++)
4993 {
4994 tree decl = tree_node ();
4995 if (decl && !DECL_P (decl))
4996 {
4997 set_overrun ();
4998 break;
4999 }
5000 v->quick_push (decl);
5001 }
5002
5003 if (get_overrun ())
5004 {
5005 vec_free (v);
5006 v = NULL;
5007 }
5008 }
5009
5010 return v;
5011 }
5012
5013 /* A vector of trees. */
5014
5015 void
5016 trees_out::tree_vec (vec<tree, va_gc> *v)
5017 {
5018 unsigned len = vec_safe_length (v);
5019 if (streaming_p ())
5020 u (len);
5021 for (unsigned ix = 0; ix != len; ix++)
5022 tree_node ((*v)[ix]);
5023 }
5024
5025 vec<tree, va_gc> *
5026 trees_in::tree_vec ()
5027 {
5028 vec<tree, va_gc> *v = NULL;
5029 if (unsigned len = u ())
5030 {
5031 vec_alloc (v, len);
5032 for (unsigned ix = 0; ix != len; ix++)
5033 v->quick_push (tree_node ());
5034 }
5035 return v;
5036 }
5037
5038 /* A vector of tree pairs. */
5039
5040 void
5041 trees_out::tree_pair_vec (vec<tree_pair_s, va_gc> *v)
5042 {
5043 unsigned len = vec_safe_length (v);
5044 if (streaming_p ())
5045 u (len);
5046 if (len)
5047 for (unsigned ix = 0; ix != len; ix++)
5048 {
5049 tree_pair_s const &s = (*v)[ix];
5050 tree_node (s.purpose);
5051 tree_node (s.value);
5052 }
5053 }
5054
5055 vec<tree_pair_s, va_gc> *
5056 trees_in::tree_pair_vec ()
5057 {
5058 vec<tree_pair_s, va_gc> *v = NULL;
5059 if (unsigned len = u ())
5060 {
5061 vec_alloc (v, len);
5062 for (unsigned ix = 0; ix != len; ix++)
5063 {
5064 tree_pair_s s;
5065 s.purpose = tree_node ();
5066 s.value = tree_node ();
5067 v->quick_push (s);
5068 }
5069 }
5070 return v;
5071 }
5072
5073 void
5074 trees_out::tree_list (tree list, bool has_purpose)
5075 {
5076 for (; list; list = TREE_CHAIN (list))
5077 {
5078 gcc_checking_assert (TREE_VALUE (list));
5079 tree_node (TREE_VALUE (list));
5080 if (has_purpose)
5081 tree_node (TREE_PURPOSE (list));
5082 }
5083 tree_node (NULL_TREE);
5084 }
5085
5086 tree
5087 trees_in::tree_list (bool has_purpose)
5088 {
5089 tree res = NULL_TREE;
5090
5091 for (tree *chain = &res; tree value = tree_node ();
5092 chain = &TREE_CHAIN (*chain))
5093 {
5094 tree purpose = has_purpose ? tree_node () : NULL_TREE;
5095 *chain = build_tree_list (purpose, value);
5096 }
5097
5098 return res;
5099 }
5100 /* Start tree write. Write information to allocate the receiving
5101 node. */
5102
5103 void
5104 trees_out::start (tree t, bool code_streamed)
5105 {
5106 if (TYPE_P (t))
5107 {
5108 enum tree_code code = TREE_CODE (t);
5109 gcc_checking_assert (TYPE_MAIN_VARIANT (t) == t);
5110 /* All these types are TYPE_NON_COMMON. */
5111 gcc_checking_assert (code == RECORD_TYPE
5112 || code == UNION_TYPE
5113 || code == ENUMERAL_TYPE
5114 || code == TEMPLATE_TYPE_PARM
5115 || code == TEMPLATE_TEMPLATE_PARM
5116 || code == BOUND_TEMPLATE_TEMPLATE_PARM);
5117 }
5118
5119 if (!code_streamed)
5120 u (TREE_CODE (t));
5121
5122 switch (TREE_CODE (t))
5123 {
5124 default:
5125 if (TREE_CODE_CLASS (TREE_CODE (t)) == tcc_vl_exp)
5126 u (VL_EXP_OPERAND_LENGTH (t));
5127 break;
5128
5129 case INTEGER_CST:
5130 u (TREE_INT_CST_NUNITS (t));
5131 u (TREE_INT_CST_EXT_NUNITS (t));
5132 u (TREE_INT_CST_OFFSET_NUNITS (t));
5133 break;
5134
5135 case OMP_CLAUSE:
5136 state->extensions |= SE_OPENMP;
5137 u (OMP_CLAUSE_CODE (t));
5138 break;
5139
5140 case STRING_CST:
5141 str (TREE_STRING_POINTER (t), TREE_STRING_LENGTH (t));
5142 break;
5143
5144 case VECTOR_CST:
5145 u (VECTOR_CST_LOG2_NPATTERNS (t));
5146 u (VECTOR_CST_NELTS_PER_PATTERN (t));
5147 break;
5148
5149 case TREE_BINFO:
5150 u (BINFO_N_BASE_BINFOS (t));
5151 break;
5152
5153 case TREE_VEC:
5154 u (TREE_VEC_LENGTH (t));
5155 break;
5156
5157 case FIXED_CST:
5158 case POLY_INT_CST:
5159 gcc_unreachable (); /* Not supported in C++. */
5160 break;
5161
5162 case IDENTIFIER_NODE:
5163 case SSA_NAME:
5164 case TARGET_MEM_REF:
5165 case TRANSLATION_UNIT_DECL:
5166 /* We shouldn't meet these. */
5167 gcc_unreachable ();
5168 break;
5169 }
5170 }
5171
5172 /* Start tree read. Allocate the receiving node. */
5173
5174 tree
5175 trees_in::start (unsigned code)
5176 {
5177 tree t = NULL_TREE;
5178
5179 if (!code)
5180 code = u ();
5181
5182 switch (code)
5183 {
5184 default:
5185 if (code >= MAX_TREE_CODES)
5186 {
5187 fail:
5188 set_overrun ();
5189 return NULL_TREE;
5190 }
5191 else if (TREE_CODE_CLASS (code) == tcc_vl_exp)
5192 {
5193 unsigned ops = u ();
5194 t = build_vl_exp (tree_code (code), ops);
5195 }
5196 else
5197 t = make_node (tree_code (code));
5198 break;
5199
5200 case INTEGER_CST:
5201 {
5202 unsigned n = u ();
5203 unsigned e = u ();
5204 t = make_int_cst (n, e);
5205 TREE_INT_CST_OFFSET_NUNITS(t) = u ();
5206 }
5207 break;
5208
5209 case OMP_CLAUSE:
5210 {
5211 if (!(state->extensions & SE_OPENMP))
5212 goto fail;
5213
5214 unsigned omp_code = u ();
5215 t = build_omp_clause (UNKNOWN_LOCATION, omp_clause_code (omp_code));
5216 }
5217 break;
5218
5219 case STRING_CST:
5220 {
5221 size_t l;
5222 const char *chars = str (&l);
5223 t = build_string (l, chars);
5224 }
5225 break;
5226
5227 case VECTOR_CST:
5228 {
5229 unsigned log2_npats = u ();
5230 unsigned elts_per = u ();
5231 t = make_vector (log2_npats, elts_per);
5232 }
5233 break;
5234
5235 case TREE_BINFO:
5236 t = make_tree_binfo (u ());
5237 break;
5238
5239 case TREE_VEC:
5240 t = make_tree_vec (u ());
5241 break;
5242
5243 case FIXED_CST:
5244 case IDENTIFIER_NODE:
5245 case POLY_INT_CST:
5246 case SSA_NAME:
5247 case TARGET_MEM_REF:
5248 case TRANSLATION_UNIT_DECL:
5249 goto fail;
5250 }
5251
5252 return t;
5253 }
5254
5255 /* The structure streamers access the raw fields, because the
5256 alternative, of using the accessor macros can require using
5257 different accessors for the same underlying field, depending on the
5258 tree code. That's both confusing and annoying. */
5259
5260 /* Read & write the core boolean flags. */
5261
5262 void
5263 trees_out::core_bools (tree t)
5264 {
5265 #define WB(X) (b (X))
5266 tree_code code = TREE_CODE (t);
5267
5268 WB (t->base.side_effects_flag);
5269 WB (t->base.constant_flag);
5270 WB (t->base.addressable_flag);
5271 WB (t->base.volatile_flag);
5272 WB (t->base.readonly_flag);
5273 /* base.asm_written_flag is a property of the current TU's use of
5274 this decl. */
5275 WB (t->base.nowarning_flag);
5276 /* base.visited read as zero (it's set for writer, because that's
5277 how we mark nodes). */
5278 /* base.used_flag is not streamed. Readers may set TREE_USED of
5279 decls they use. */
5280 WB (t->base.nothrow_flag);
5281 WB (t->base.static_flag);
5282 if (TREE_CODE_CLASS (code) != tcc_type)
5283 /* This is TYPE_CACHED_VALUES_P for types. */
5284 WB (t->base.public_flag);
5285 WB (t->base.private_flag);
5286 WB (t->base.protected_flag);
5287 WB (t->base.deprecated_flag);
5288 WB (t->base.default_def_flag);
5289
5290 switch (code)
5291 {
5292 case CALL_EXPR:
5293 case INTEGER_CST:
5294 case SSA_NAME:
5295 case TARGET_MEM_REF:
5296 case TREE_VEC:
5297 /* These use different base.u fields. */
5298 break;
5299
5300 default:
5301 WB (t->base.u.bits.lang_flag_0);
5302 bool flag_1 = t->base.u.bits.lang_flag_1;
5303 if (!flag_1)
5304 ;
5305 else if (code == TEMPLATE_INFO)
5306 /* This is TI_PENDING_TEMPLATE_FLAG, not relevant to reader. */
5307 flag_1 = false;
5308 else if (code == VAR_DECL)
5309 {
5310 /* This is DECL_INITIALIZED_P. */
5311 if (DECL_CONTEXT (t)
5312 && TREE_CODE (DECL_CONTEXT (t)) != FUNCTION_DECL)
5313 /* We'll set this when reading the definition. */
5314 flag_1 = false;
5315 }
5316 WB (flag_1);
5317 WB (t->base.u.bits.lang_flag_2);
5318 WB (t->base.u.bits.lang_flag_3);
5319 WB (t->base.u.bits.lang_flag_4);
5320 WB (t->base.u.bits.lang_flag_5);
5321 WB (t->base.u.bits.lang_flag_6);
5322 WB (t->base.u.bits.saturating_flag);
5323 WB (t->base.u.bits.unsigned_flag);
5324 WB (t->base.u.bits.packed_flag);
5325 WB (t->base.u.bits.user_align);
5326 WB (t->base.u.bits.nameless_flag);
5327 WB (t->base.u.bits.atomic_flag);
5328 break;
5329 }
5330
5331 if (CODE_CONTAINS_STRUCT (code, TS_TYPE_COMMON))
5332 {
5333 WB (t->type_common.no_force_blk_flag);
5334 WB (t->type_common.needs_constructing_flag);
5335 WB (t->type_common.transparent_aggr_flag);
5336 WB (t->type_common.restrict_flag);
5337 WB (t->type_common.string_flag);
5338 WB (t->type_common.lang_flag_0);
5339 WB (t->type_common.lang_flag_1);
5340 WB (t->type_common.lang_flag_2);
5341 WB (t->type_common.lang_flag_3);
5342 WB (t->type_common.lang_flag_4);
5343 WB (t->type_common.lang_flag_5);
5344 WB (t->type_common.lang_flag_6);
5345 WB (t->type_common.typeless_storage);
5346 }
5347
5348 if (CODE_CONTAINS_STRUCT (code, TS_DECL_COMMON))
5349 {
5350 WB (t->decl_common.nonlocal_flag);
5351 WB (t->decl_common.virtual_flag);
5352 WB (t->decl_common.ignored_flag);
5353 WB (t->decl_common.abstract_flag);
5354 WB (t->decl_common.artificial_flag);
5355 WB (t->decl_common.preserve_flag);
5356 WB (t->decl_common.debug_expr_is_from);
5357 WB (t->decl_common.lang_flag_0);
5358 WB (t->decl_common.lang_flag_1);
5359 WB (t->decl_common.lang_flag_2);
5360 WB (t->decl_common.lang_flag_3);
5361 WB (t->decl_common.lang_flag_4);
5362 WB (t->decl_common.lang_flag_5);
5363 WB (t->decl_common.lang_flag_6);
5364 WB (t->decl_common.lang_flag_7);
5365 WB (t->decl_common.lang_flag_8);
5366 WB (t->decl_common.decl_flag_0);
5367
5368 {
5369 /* DECL_EXTERNAL -> decl_flag_1
5370 == it is defined elsewhere
5371 DECL_NOT_REALLY_EXTERN -> base.not_really_extern
5372 == that was a lie, it is here */
5373
5374 bool is_external = t->decl_common.decl_flag_1;
5375 if (!is_external)
5376 /* decl_flag_1 is DECL_EXTERNAL. Things we emit here, might
5377 well be external from the POV of an importer. */
5378 // FIXME: Do we need to know if this is a TEMPLATE_RESULT --
5379 // a flag from the caller?
5380 switch (code)
5381 {
5382 default:
5383 break;
5384
5385 case VAR_DECL:
5386 if (TREE_PUBLIC (t)
5387 && !DECL_VAR_DECLARED_INLINE_P (t))
5388 is_external = true;
5389 break;
5390
5391 case FUNCTION_DECL:
5392 if (TREE_PUBLIC (t)
5393 && !DECL_DECLARED_INLINE_P (t))
5394 is_external = true;
5395 break;
5396 }
5397 WB (is_external);
5398 }
5399
5400 WB (t->decl_common.decl_flag_2);
5401 WB (t->decl_common.decl_flag_3);
5402 WB (t->decl_common.not_gimple_reg_flag);
5403 WB (t->decl_common.decl_by_reference_flag);
5404 WB (t->decl_common.decl_read_flag);
5405 WB (t->decl_common.decl_nonshareable_flag);
5406 }
5407
5408 if (CODE_CONTAINS_STRUCT (code, TS_DECL_WITH_VIS))
5409 {
5410 WB (t->decl_with_vis.defer_output);
5411 WB (t->decl_with_vis.hard_register);
5412 WB (t->decl_with_vis.common_flag);
5413 WB (t->decl_with_vis.in_text_section);
5414 WB (t->decl_with_vis.in_constant_pool);
5415 WB (t->decl_with_vis.dllimport_flag);
5416 WB (t->decl_with_vis.weak_flag);
5417 WB (t->decl_with_vis.seen_in_bind_expr);
5418 WB (t->decl_with_vis.comdat_flag);
5419 WB (t->decl_with_vis.visibility_specified);
5420 WB (t->decl_with_vis.init_priority_p);
5421 WB (t->decl_with_vis.shadowed_for_var_p);
5422 WB (t->decl_with_vis.cxx_constructor);
5423 WB (t->decl_with_vis.cxx_destructor);
5424 WB (t->decl_with_vis.final);
5425 WB (t->decl_with_vis.regdecl_flag);
5426 }
5427
5428 if (CODE_CONTAINS_STRUCT (code, TS_FUNCTION_DECL))
5429 {
5430 WB (t->function_decl.static_ctor_flag);
5431 WB (t->function_decl.static_dtor_flag);
5432 WB (t->function_decl.uninlinable);
5433 WB (t->function_decl.possibly_inlined);
5434 WB (t->function_decl.novops_flag);
5435 WB (t->function_decl.returns_twice_flag);
5436 WB (t->function_decl.malloc_flag);
5437 WB (t->function_decl.declared_inline_flag);
5438 WB (t->function_decl.no_inline_warning_flag);
5439 WB (t->function_decl.no_instrument_function_entry_exit);
5440 WB (t->function_decl.no_limit_stack);
5441 WB (t->function_decl.disregard_inline_limits);
5442 WB (t->function_decl.pure_flag);
5443 WB (t->function_decl.looping_const_or_pure_flag);
5444
5445 WB (t->function_decl.has_debug_args_flag);
5446 WB (t->function_decl.versioned_function);
5447
5448 /* decl_type is a (misnamed) 2 bit discriminator. */
5449 unsigned kind = t->function_decl.decl_type;
5450 WB ((kind >> 0) & 1);
5451 WB ((kind >> 1) & 1);
5452 }
5453 #undef WB
5454 }
5455
5456 bool
5457 trees_in::core_bools (tree t)
5458 {
5459 #define RB(X) ((X) = b ())
5460 tree_code code = TREE_CODE (t);
5461
5462 RB (t->base.side_effects_flag);
5463 RB (t->base.constant_flag);
5464 RB (t->base.addressable_flag);
5465 RB (t->base.volatile_flag);
5466 RB (t->base.readonly_flag);
5467 /* base.asm_written_flag is not streamed. */
5468 RB (t->base.nowarning_flag);
5469 /* base.visited is not streamed. */
5470 /* base.used_flag is not streamed. */
5471 RB (t->base.nothrow_flag);
5472 RB (t->base.static_flag);
5473 if (TREE_CODE_CLASS (code) != tcc_type)
5474 RB (t->base.public_flag);
5475 RB (t->base.private_flag);
5476 RB (t->base.protected_flag);
5477 RB (t->base.deprecated_flag);
5478 RB (t->base.default_def_flag);
5479
5480 switch (code)
5481 {
5482 case CALL_EXPR:
5483 case INTEGER_CST:
5484 case SSA_NAME:
5485 case TARGET_MEM_REF:
5486 case TREE_VEC:
5487 /* These use different base.u fields. */
5488 break;
5489
5490 default:
5491 RB (t->base.u.bits.lang_flag_0);
5492 RB (t->base.u.bits.lang_flag_1);
5493 RB (t->base.u.bits.lang_flag_2);
5494 RB (t->base.u.bits.lang_flag_3);
5495 RB (t->base.u.bits.lang_flag_4);
5496 RB (t->base.u.bits.lang_flag_5);
5497 RB (t->base.u.bits.lang_flag_6);
5498 RB (t->base.u.bits.saturating_flag);
5499 RB (t->base.u.bits.unsigned_flag);
5500 RB (t->base.u.bits.packed_flag);
5501 RB (t->base.u.bits.user_align);
5502 RB (t->base.u.bits.nameless_flag);
5503 RB (t->base.u.bits.atomic_flag);
5504 break;
5505 }
5506
5507 if (CODE_CONTAINS_STRUCT (code, TS_TYPE_COMMON))
5508 {
5509 RB (t->type_common.no_force_blk_flag);
5510 RB (t->type_common.needs_constructing_flag);
5511 RB (t->type_common.transparent_aggr_flag);
5512 RB (t->type_common.restrict_flag);
5513 RB (t->type_common.string_flag);
5514 RB (t->type_common.lang_flag_0);
5515 RB (t->type_common.lang_flag_1);
5516 RB (t->type_common.lang_flag_2);
5517 RB (t->type_common.lang_flag_3);
5518 RB (t->type_common.lang_flag_4);
5519 RB (t->type_common.lang_flag_5);
5520 RB (t->type_common.lang_flag_6);
5521 RB (t->type_common.typeless_storage);
5522 }
5523
5524 if (CODE_CONTAINS_STRUCT (code, TS_DECL_COMMON))
5525 {
5526 RB (t->decl_common.nonlocal_flag);
5527 RB (t->decl_common.virtual_flag);
5528 RB (t->decl_common.ignored_flag);
5529 RB (t->decl_common.abstract_flag);
5530 RB (t->decl_common.artificial_flag);
5531 RB (t->decl_common.preserve_flag);
5532 RB (t->decl_common.debug_expr_is_from);
5533 RB (t->decl_common.lang_flag_0);
5534 RB (t->decl_common.lang_flag_1);
5535 RB (t->decl_common.lang_flag_2);
5536 RB (t->decl_common.lang_flag_3);
5537 RB (t->decl_common.lang_flag_4);
5538 RB (t->decl_common.lang_flag_5);
5539 RB (t->decl_common.lang_flag_6);
5540 RB (t->decl_common.lang_flag_7);
5541 RB (t->decl_common.lang_flag_8);
5542 RB (t->decl_common.decl_flag_0);
5543 RB (t->decl_common.decl_flag_1);
5544 RB (t->decl_common.decl_flag_2);
5545 RB (t->decl_common.decl_flag_3);
5546 RB (t->decl_common.not_gimple_reg_flag);
5547 RB (t->decl_common.decl_by_reference_flag);
5548 RB (t->decl_common.decl_read_flag);
5549 RB (t->decl_common.decl_nonshareable_flag);
5550 }
5551
5552 if (CODE_CONTAINS_STRUCT (code, TS_DECL_WITH_VIS))
5553 {
5554 RB (t->decl_with_vis.defer_output);
5555 RB (t->decl_with_vis.hard_register);
5556 RB (t->decl_with_vis.common_flag);
5557 RB (t->decl_with_vis.in_text_section);
5558 RB (t->decl_with_vis.in_constant_pool);
5559 RB (t->decl_with_vis.dllimport_flag);
5560 RB (t->decl_with_vis.weak_flag);
5561 RB (t->decl_with_vis.seen_in_bind_expr);
5562 RB (t->decl_with_vis.comdat_flag);
5563 RB (t->decl_with_vis.visibility_specified);
5564 RB (t->decl_with_vis.init_priority_p);
5565 RB (t->decl_with_vis.shadowed_for_var_p);
5566 RB (t->decl_with_vis.cxx_constructor);
5567 RB (t->decl_with_vis.cxx_destructor);
5568 RB (t->decl_with_vis.final);
5569 RB (t->decl_with_vis.regdecl_flag);
5570 }
5571
5572 if (CODE_CONTAINS_STRUCT (code, TS_FUNCTION_DECL))
5573 {
5574 RB (t->function_decl.static_ctor_flag);
5575 RB (t->function_decl.static_dtor_flag);
5576 RB (t->function_decl.uninlinable);
5577 RB (t->function_decl.possibly_inlined);
5578 RB (t->function_decl.novops_flag);
5579 RB (t->function_decl.returns_twice_flag);
5580 RB (t->function_decl.malloc_flag);
5581 RB (t->function_decl.declared_inline_flag);
5582 RB (t->function_decl.no_inline_warning_flag);
5583 RB (t->function_decl.no_instrument_function_entry_exit);
5584 RB (t->function_decl.no_limit_stack);
5585 RB (t->function_decl.disregard_inline_limits);
5586 RB (t->function_decl.pure_flag);
5587 RB (t->function_decl.looping_const_or_pure_flag);
5588
5589 RB (t->function_decl.has_debug_args_flag);
5590 RB (t->function_decl.versioned_function);
5591
5592 /* decl_type is a (misnamed) 2 bit discriminator. */
5593 unsigned kind = 0;
5594 kind |= unsigned (b ()) << 0;
5595 kind |= unsigned (b ()) << 1;
5596 t->function_decl.decl_type = function_decl_type (kind);
5597 }
5598 #undef RB
5599 return !get_overrun ();
5600 }
5601
5602 void
5603 trees_out::lang_decl_bools (tree t)
5604 {
5605 #define WB(X) (b (X))
5606 const struct lang_decl *lang = DECL_LANG_SPECIFIC (t);
5607
5608 WB (lang->u.base.language == lang_cplusplus);
5609 WB ((lang->u.base.use_template >> 0) & 1);
5610 WB ((lang->u.base.use_template >> 1) & 1);
5611 /* Do not write lang->u.base.not_really_extern, importer will set
5612 when reading the definition (if any). */
5613 WB (lang->u.base.initialized_in_class);
5614 WB (lang->u.base.threadprivate_or_deleted_p);
5615 /* Do not write lang->u.base.anticipated_p, it is a property of the
5616 current TU. */
5617 WB (lang->u.base.friend_or_tls);
5618 WB (lang->u.base.unknown_bound_p);
5619 /* Do not write lang->u.base.odr_used, importer will recalculate if
5620 they do ODR use this decl. */
5621 WB (lang->u.base.concept_p);
5622 WB (lang->u.base.var_declared_inline_p);
5623 WB (lang->u.base.dependent_init_p);
5624 WB (lang->u.base.module_purview_p);
5625 if (VAR_OR_FUNCTION_DECL_P (t))
5626 WB (lang->u.base.module_pending_p);
5627 switch (lang->u.base.selector)
5628 {
5629 default:
5630 gcc_unreachable ();
5631
5632 case lds_fn: /* lang_decl_fn. */
5633 WB (lang->u.fn.global_ctor_p);
5634 WB (lang->u.fn.global_dtor_p);
5635 WB (lang->u.fn.static_function);
5636 WB (lang->u.fn.pure_virtual);
5637 WB (lang->u.fn.defaulted_p);
5638 WB (lang->u.fn.has_in_charge_parm_p);
5639 WB (lang->u.fn.has_vtt_parm_p);
5640 /* There shouldn't be a pending inline at this point. */
5641 gcc_assert (!lang->u.fn.pending_inline_p);
5642 WB (lang->u.fn.nonconverting);
5643 WB (lang->u.fn.thunk_p);
5644 WB (lang->u.fn.this_thunk_p);
5645 /* Do not stream lang->u.hidden_friend_p, it is a property of
5646 the TU. */
5647 WB (lang->u.fn.omp_declare_reduction_p);
5648 WB (lang->u.fn.has_dependent_explicit_spec_p);
5649 WB (lang->u.fn.immediate_fn_p);
5650 WB (lang->u.fn.maybe_deleted);
5651 goto lds_min;
5652
5653 case lds_decomp: /* lang_decl_decomp. */
5654 /* No bools. */
5655 goto lds_min;
5656
5657 case lds_min: /* lang_decl_min. */
5658 lds_min:
5659 /* No bools. */
5660 break;
5661
5662 case lds_ns: /* lang_decl_ns. */
5663 /* No bools. */
5664 break;
5665
5666 case lds_parm: /* lang_decl_parm. */
5667 /* No bools. */
5668 break;
5669 }
5670 #undef WB
5671 }
5672
5673 bool
5674 trees_in::lang_decl_bools (tree t)
5675 {
5676 #define RB(X) ((X) = b ())
5677 struct lang_decl *lang = DECL_LANG_SPECIFIC (t);
5678
5679 lang->u.base.language = b () ? lang_cplusplus : lang_c;
5680 unsigned v;
5681 v = b () << 0;
5682 v |= b () << 1;
5683 lang->u.base.use_template = v;
5684 /* lang->u.base.not_really_extern is not streamed. */
5685 RB (lang->u.base.initialized_in_class);
5686 RB (lang->u.base.threadprivate_or_deleted_p);
5687 /* lang->u.base.anticipated_p is not streamed. */
5688 RB (lang->u.base.friend_or_tls);
5689 RB (lang->u.base.unknown_bound_p);
5690 /* lang->u.base.odr_used is not streamed. */
5691 RB (lang->u.base.concept_p);
5692 RB (lang->u.base.var_declared_inline_p);
5693 RB (lang->u.base.dependent_init_p);
5694 RB (lang->u.base.module_purview_p);
5695 if (VAR_OR_FUNCTION_DECL_P (t))
5696 RB (lang->u.base.module_pending_p);
5697 switch (lang->u.base.selector)
5698 {
5699 default:
5700 gcc_unreachable ();
5701
5702 case lds_fn: /* lang_decl_fn. */
5703 RB (lang->u.fn.global_ctor_p);
5704 RB (lang->u.fn.global_dtor_p);
5705 RB (lang->u.fn.static_function);
5706 RB (lang->u.fn.pure_virtual);
5707 RB (lang->u.fn.defaulted_p);
5708 RB (lang->u.fn.has_in_charge_parm_p);
5709 RB (lang->u.fn.has_vtt_parm_p);
5710 RB (lang->u.fn.nonconverting);
5711 RB (lang->u.fn.thunk_p);
5712 RB (lang->u.fn.this_thunk_p);
5713 /* lang->u.fn.hidden_friend_p is not streamed. */
5714 RB (lang->u.fn.omp_declare_reduction_p);
5715 RB (lang->u.fn.has_dependent_explicit_spec_p);
5716 RB (lang->u.fn.immediate_fn_p);
5717 RB (lang->u.fn.maybe_deleted);
5718 goto lds_min;
5719
5720 case lds_decomp: /* lang_decl_decomp. */
5721 /* No bools. */
5722 goto lds_min;
5723
5724 case lds_min: /* lang_decl_min. */
5725 lds_min:
5726 /* No bools. */
5727 break;
5728
5729 case lds_ns: /* lang_decl_ns. */
5730 /* No bools. */
5731 break;
5732
5733 case lds_parm: /* lang_decl_parm. */
5734 /* No bools. */
5735 break;
5736 }
5737 #undef RB
5738 return !get_overrun ();
5739 }
5740
5741 void
5742 trees_out::lang_type_bools (tree t)
5743 {
5744 #define WB(X) (b (X))
5745 const struct lang_type *lang = TYPE_LANG_SPECIFIC (t);
5746
5747 WB (lang->has_type_conversion);
5748 WB (lang->has_copy_ctor);
5749 WB (lang->has_default_ctor);
5750 WB (lang->const_needs_init);
5751 WB (lang->ref_needs_init);
5752 WB (lang->has_const_copy_assign);
5753 WB ((lang->use_template >> 0) & 1);
5754 WB ((lang->use_template >> 1) & 1);
5755
5756 WB (lang->has_mutable);
5757 WB (lang->com_interface);
5758 WB (lang->non_pod_class);
5759 WB (lang->nearly_empty_p);
5760 WB (lang->user_align);
5761 WB (lang->has_copy_assign);
5762 WB (lang->has_new);
5763 WB (lang->has_array_new);
5764
5765 WB ((lang->gets_delete >> 0) & 1);
5766 WB ((lang->gets_delete >> 1) & 1);
5767 // Interfaceness is recalculated upon reading. May have to revisit?
5768 // How do dllexport and dllimport interact across a module?
5769 // lang->interface_only
5770 // lang->interface_unknown
5771 WB (lang->contains_empty_class_p);
5772 WB (lang->anon_aggr);
5773 WB (lang->non_zero_init);
5774 WB (lang->empty_p);
5775
5776 WB (lang->vec_new_uses_cookie);
5777 WB (lang->declared_class);
5778 WB (lang->diamond_shaped);
5779 WB (lang->repeated_base);
5780 gcc_assert (!lang->being_defined);
5781 // lang->debug_requested
5782 WB (lang->fields_readonly);
5783 WB (lang->ptrmemfunc_flag);
5784
5785 WB (lang->lazy_default_ctor);
5786 WB (lang->lazy_copy_ctor);
5787 WB (lang->lazy_copy_assign);
5788 WB (lang->lazy_destructor);
5789 WB (lang->has_const_copy_ctor);
5790 WB (lang->has_complex_copy_ctor);
5791 WB (lang->has_complex_copy_assign);
5792 WB (lang->non_aggregate);
5793
5794 WB (lang->has_complex_dflt);
5795 WB (lang->has_list_ctor);
5796 WB (lang->non_std_layout);
5797 WB (lang->is_literal);
5798 WB (lang->lazy_move_ctor);
5799 WB (lang->lazy_move_assign);
5800 WB (lang->has_complex_move_ctor);
5801 WB (lang->has_complex_move_assign);
5802
5803 WB (lang->has_constexpr_ctor);
5804 WB (lang->unique_obj_representations);
5805 WB (lang->unique_obj_representations_set);
5806 #undef WB
5807 }
5808
5809 bool
5810 trees_in::lang_type_bools (tree t)
5811 {
5812 #define RB(X) ((X) = b ())
5813 struct lang_type *lang = TYPE_LANG_SPECIFIC (t);
5814
5815 RB (lang->has_type_conversion);
5816 RB (lang->has_copy_ctor);
5817 RB (lang->has_default_ctor);
5818 RB (lang->const_needs_init);
5819 RB (lang->ref_needs_init);
5820 RB (lang->has_const_copy_assign);
5821 unsigned v;
5822 v = b () << 0;
5823 v |= b () << 1;
5824 lang->use_template = v;
5825
5826 RB (lang->has_mutable);
5827 RB (lang->com_interface);
5828 RB (lang->non_pod_class);
5829 RB (lang->nearly_empty_p);
5830 RB (lang->user_align);
5831 RB (lang->has_copy_assign);
5832 RB (lang->has_new);
5833 RB (lang->has_array_new);
5834
5835 v = b () << 0;
5836 v |= b () << 1;
5837 lang->gets_delete = v;
5838 // lang->interface_only
5839 // lang->interface_unknown
5840 lang->interface_unknown = true; // Redetermine interface
5841 RB (lang->contains_empty_class_p);
5842 RB (lang->anon_aggr);
5843 RB (lang->non_zero_init);
5844 RB (lang->empty_p);
5845
5846 RB (lang->vec_new_uses_cookie);
5847 RB (lang->declared_class);
5848 RB (lang->diamond_shaped);
5849 RB (lang->repeated_base);
5850 gcc_assert (!lang->being_defined);
5851 gcc_assert (!lang->debug_requested);
5852 RB (lang->fields_readonly);
5853 RB (lang->ptrmemfunc_flag);
5854
5855 RB (lang->lazy_default_ctor);
5856 RB (lang->lazy_copy_ctor);
5857 RB (lang->lazy_copy_assign);
5858 RB (lang->lazy_destructor);
5859 RB (lang->has_const_copy_ctor);
5860 RB (lang->has_complex_copy_ctor);
5861 RB (lang->has_complex_copy_assign);
5862 RB (lang->non_aggregate);
5863
5864 RB (lang->has_complex_dflt);
5865 RB (lang->has_list_ctor);
5866 RB (lang->non_std_layout);
5867 RB (lang->is_literal);
5868 RB (lang->lazy_move_ctor);
5869 RB (lang->lazy_move_assign);
5870 RB (lang->has_complex_move_ctor);
5871 RB (lang->has_complex_move_assign);
5872
5873 RB (lang->has_constexpr_ctor);
5874 RB (lang->unique_obj_representations);
5875 RB (lang->unique_obj_representations_set);
5876 #undef RB
5877 return !get_overrun ();
5878 }
5879
5880 /* Read & write the core values and pointers. */
5881
5882 void
5883 trees_out::core_vals (tree t)
5884 {
5885 #define WU(X) (u (X))
5886 #define WT(X) (tree_node (X))
5887 tree_code code = TREE_CODE (t);
5888
5889 /* First by shape of the tree. */
5890
5891 if (CODE_CONTAINS_STRUCT (code, TS_DECL_MINIMAL))
5892 {
5893 /* Write this early, for better log information. */
5894 WT (t->decl_minimal.name);
5895 if (!DECL_TEMPLATE_PARM_P (t))
5896 WT (t->decl_minimal.context);
5897
5898 if (state)
5899 state->write_location (*this, t->decl_minimal.locus);
5900 }
5901
5902 if (CODE_CONTAINS_STRUCT (code, TS_TYPE_COMMON))
5903 {
5904 /* The only types we write also have TYPE_NON_COMMON. */
5905 gcc_checking_assert (CODE_CONTAINS_STRUCT (code, TS_TYPE_NON_COMMON));
5906
5907 /* We only stream the main variant. */
5908 gcc_checking_assert (TYPE_MAIN_VARIANT (t) == t);
5909
5910 /* Stream the name & context first, for better log information */
5911 WT (t->type_common.name);
5912 WT (t->type_common.context);
5913
5914 /* By construction we want to make sure we have the canonical
5915 and main variants already in the type table, so emit them
5916 now. */
5917 WT (t->type_common.main_variant);
5918
5919 tree canonical = t->type_common.canonical;
5920 if (canonical && DECL_TEMPLATE_PARM_P (TYPE_NAME (t)))
5921 /* We do not want to wander into different templates.
5922 Reconstructed on stream in. */
5923 canonical = t;
5924 WT (canonical);
5925
5926 /* type_common.next_variant is internally manipulated. */
5927 /* type_common.pointer_to, type_common.reference_to. */
5928
5929 if (streaming_p ())
5930 {
5931 WU (t->type_common.precision);
5932 WU (t->type_common.contains_placeholder_bits);
5933 WU (t->type_common.mode);
5934 WU (t->type_common.align);
5935 }
5936
5937 if (!RECORD_OR_UNION_CODE_P (code))
5938 {
5939 WT (t->type_common.size);
5940 WT (t->type_common.size_unit);
5941 }
5942 WT (t->type_common.attributes);
5943
5944 WT (t->type_common.common.chain); /* TYPE_STUB_DECL. */
5945 }
5946
5947 if (CODE_CONTAINS_STRUCT (code, TS_DECL_COMMON))
5948 {
5949 if (streaming_p ())
5950 {
5951 WU (t->decl_common.mode);
5952 WU (t->decl_common.off_align);
5953 WU (t->decl_common.align);
5954 }
5955
5956 /* For templates these hold instantiation (partial and/or
5957 specialization) information. */
5958 if (code != TEMPLATE_DECL)
5959 {
5960 WT (t->decl_common.size);
5961 WT (t->decl_common.size_unit);
5962 }
5963
5964 WT (t->decl_common.attributes);
5965 // FIXME: Does this introduce cross-decl links? For instance
5966 // from instantiation to the template. If so, we'll need more
5967 // deduplication logic. I think we'll need to walk the blocks
5968 // of the owning function_decl's abstract origin in tandem, to
5969 // generate the locating data needed?
5970 WT (t->decl_common.abstract_origin);
5971 }
5972
5973 if (CODE_CONTAINS_STRUCT (code, TS_DECL_WITH_VIS))
5974 {
5975 WT (t->decl_with_vis.assembler_name);
5976 if (streaming_p ())
5977 WU (t->decl_with_vis.visibility);
5978 }
5979
5980 if (CODE_CONTAINS_STRUCT (code, TS_TYPE_NON_COMMON))
5981 {
5982 /* Records and unions hold FIELDS, VFIELD & BINFO on these
5983 things. */
5984 if (!RECORD_OR_UNION_CODE_P (code) && code != ENUMERAL_TYPE)
5985 {
5986 // FIXME: These are from tpl_parm_value's 'type' writing.
5987 // Perhaps it should just be doing them directly?
5988 gcc_checking_assert (code == TEMPLATE_TYPE_PARM
5989 || code == TEMPLATE_TEMPLATE_PARM
5990 || code == BOUND_TEMPLATE_TEMPLATE_PARM);
5991 gcc_checking_assert (!TYPE_CACHED_VALUES_P (t));
5992 WT (t->type_non_common.values);
5993 WT (t->type_non_common.maxval);
5994 WT (t->type_non_common.minval);
5995 }
5996
5997 WT (t->type_non_common.lang_1);
5998 }
5999
6000 if (CODE_CONTAINS_STRUCT (code, TS_EXP))
6001 {
6002 if (state)
6003 state->write_location (*this, t->exp.locus);
6004
6005 /* Walk in forward order, as (for instance) REQUIRES_EXPR has a
6006 bunch of unscoped parms on its first operand. It's safer to
6007 create those in order. */
6008 bool vl = TREE_CODE_CLASS (code) == tcc_vl_exp;
6009 for (unsigned limit = (vl ? VL_EXP_OPERAND_LENGTH (t)
6010 : TREE_OPERAND_LENGTH (t)),
6011 ix = unsigned (vl); ix != limit; ix++)
6012 WT (TREE_OPERAND (t, ix));
6013 }
6014 else
6015 /* The CODE_CONTAINS tables were inaccurate when I started. */
6016 gcc_checking_assert (TREE_CODE_CLASS (code) != tcc_expression
6017 && TREE_CODE_CLASS (code) != tcc_binary
6018 && TREE_CODE_CLASS (code) != tcc_unary
6019 && TREE_CODE_CLASS (code) != tcc_reference
6020 && TREE_CODE_CLASS (code) != tcc_comparison
6021 && TREE_CODE_CLASS (code) != tcc_statement
6022 && TREE_CODE_CLASS (code) != tcc_vl_exp);
6023
6024 /* Then by CODE. Special cases and/or 1:1 tree shape
6025 correspondance. */
6026 switch (code)
6027 {
6028 default:
6029 break;
6030
6031 case ARGUMENT_PACK_SELECT: /* Transient during instantiation. */
6032 case DEFERRED_PARSE: /* Expanded upon completion of
6033 outermost class. */
6034 case IDENTIFIER_NODE: /* Streamed specially. */
6035 case BINDING_VECTOR: /* Only in namespace-scope symbol
6036 table. */
6037 case SSA_NAME:
6038 case TRANSLATION_UNIT_DECL: /* There is only one, it is a
6039 global_tree. */
6040 case USERDEF_LITERAL: /* Expanded during parsing. */
6041 gcc_unreachable (); /* Should never meet. */
6042
6043 /* Constants. */
6044 case COMPLEX_CST:
6045 WT (TREE_REALPART (t));
6046 WT (TREE_IMAGPART (t));
6047 break;
6048
6049 case FIXED_CST:
6050 gcc_unreachable (); /* Not supported in C++. */
6051
6052 case INTEGER_CST:
6053 if (streaming_p ())
6054 {
6055 unsigned num = TREE_INT_CST_EXT_NUNITS (t);
6056 for (unsigned ix = 0; ix != num; ix++)
6057 wu (TREE_INT_CST_ELT (t, ix));
6058 }
6059 break;
6060
6061 case POLY_INT_CST:
6062 gcc_unreachable (); /* Not supported in C++. */
6063
6064 case REAL_CST:
6065 if (streaming_p ())
6066 buf (TREE_REAL_CST_PTR (t), sizeof (real_value));
6067 break;
6068
6069 case STRING_CST:
6070 /* Streamed during start. */
6071 break;
6072
6073 case VECTOR_CST:
6074 for (unsigned ix = vector_cst_encoded_nelts (t); ix--;)
6075 WT (VECTOR_CST_ENCODED_ELT (t, ix));
6076 break;
6077
6078 /* Decls. */
6079 case VAR_DECL:
6080 if (DECL_CONTEXT (t)
6081 && TREE_CODE (DECL_CONTEXT (t)) != FUNCTION_DECL)
6082 break;
6083 /* FALLTHROUGH */
6084
6085 case RESULT_DECL:
6086 case PARM_DECL:
6087 if (DECL_HAS_VALUE_EXPR_P (t))
6088 WT (DECL_VALUE_EXPR (t));
6089 /* FALLTHROUGH */
6090
6091 case CONST_DECL:
6092 case IMPORTED_DECL:
6093 WT (t->decl_common.initial);
6094 break;
6095
6096 case FIELD_DECL:
6097 WT (t->field_decl.offset);
6098 WT (t->field_decl.bit_field_type);
6099 WT (t->field_decl.qualifier); /* bitfield unit. */
6100 WT (t->field_decl.bit_offset);
6101 WT (t->field_decl.fcontext);
6102 WT (t->decl_common.initial);
6103 break;
6104
6105 case LABEL_DECL:
6106 if (streaming_p ())
6107 {
6108 WU (t->label_decl.label_decl_uid);
6109 WU (t->label_decl.eh_landing_pad_nr);
6110 }
6111 break;
6112
6113 case FUNCTION_DECL:
6114 if (streaming_p ())
6115 {
6116 /* Builtins can be streamed by value when a header declares
6117 them. */
6118 WU (DECL_BUILT_IN_CLASS (t));
6119 if (DECL_BUILT_IN_CLASS (t) != NOT_BUILT_IN)
6120 WU (DECL_UNCHECKED_FUNCTION_CODE (t));
6121 }
6122
6123 WT (t->function_decl.personality);
6124 WT (t->function_decl.function_specific_target);
6125 WT (t->function_decl.function_specific_optimization);
6126 WT (t->function_decl.vindex);
6127 break;
6128
6129 case USING_DECL:
6130 /* USING_DECL_DECLS */
6131 WT (t->decl_common.initial);
6132 /* FALLTHROUGH */
6133
6134 case TYPE_DECL:
6135 /* USING_DECL: USING_DECL_SCOPE */
6136 /* TYPE_DECL: DECL_ORIGINAL_TYPE */
6137 WT (t->decl_non_common.result);
6138 break;
6139
6140 /* Miscellaneous common nodes. */
6141 case BLOCK:
6142 if (state)
6143 {
6144 state->write_location (*this, t->block.locus);
6145 state->write_location (*this, t->block.end_locus);
6146 }
6147
6148 /* DECL_LOCAL_DECL_P decls are first encountered here and
6149 streamed by value. */
6150 chained_decls (t->block.vars);
6151 /* nonlocalized_vars is a middle-end thing. */
6152 WT (t->block.subblocks);
6153 WT (t->block.supercontext);
6154 // FIXME: As for decl's abstract_origin, does this introduce crosslinks?
6155 WT (t->block.abstract_origin);
6156 /* fragment_origin, fragment_chain are middle-end things. */
6157 WT (t->block.chain);
6158 /* nonlocalized_vars, block_num & die are middle endy/debug
6159 things. */
6160 break;
6161
6162 case CALL_EXPR:
6163 if (streaming_p ())
6164 WU (t->base.u.ifn);
6165 break;
6166
6167 case CONSTRUCTOR:
6168 {
6169 unsigned len = vec_safe_length (t->constructor.elts);
6170 if (streaming_p ())
6171 WU (len);
6172 if (len)
6173 for (unsigned ix = 0; ix != len; ix++)
6174 {
6175 const constructor_elt &elt = (*t->constructor.elts)[ix];
6176
6177 WT (elt.index);
6178 WT (elt.value);
6179 }
6180 }
6181 break;
6182
6183 case OMP_CLAUSE:
6184 {
6185 /* The ompcode is serialized in start. */
6186 if (streaming_p ())
6187 WU (t->omp_clause.subcode.map_kind);
6188 if (state)
6189 state->write_location (*this, t->omp_clause.locus);
6190
6191 unsigned len = omp_clause_num_ops[OMP_CLAUSE_CODE (t)];
6192 for (unsigned ix = 0; ix != len; ix++)
6193 WT (t->omp_clause.ops[ix]);
6194 }
6195 break;
6196
6197 case STATEMENT_LIST:
6198 for (tree_stmt_iterator iter = tsi_start (t);
6199 !tsi_end_p (iter); tsi_next (&iter))
6200 if (tree stmt = tsi_stmt (iter))
6201 WT (stmt);
6202 WT (NULL_TREE);
6203 break;
6204
6205 case OPTIMIZATION_NODE:
6206 case TARGET_OPTION_NODE:
6207 // FIXME: Our representation for these two nodes is a cache of
6208 // the resulting set of options. Not a record of the options
6209 // that got changed by a particular attribute or pragma. Should
6210 // we record that, or should we record the diff from the command
6211 // line options? The latter seems the right behaviour, but is
6212 // (a) harder, and I guess could introduce strangeness if the
6213 // importer has set some incompatible set of optimization flags?
6214 gcc_unreachable ();
6215 break;
6216
6217 case TREE_BINFO:
6218 {
6219 WT (t->binfo.common.chain);
6220 WT (t->binfo.offset);
6221 WT (t->binfo.inheritance);
6222 WT (t->binfo.vptr_field);
6223
6224 WT (t->binfo.vtable);
6225 WT (t->binfo.virtuals);
6226 WT (t->binfo.vtt_subvtt);
6227 WT (t->binfo.vtt_vptr);
6228
6229 tree_vec (BINFO_BASE_ACCESSES (t));
6230 unsigned num = vec_safe_length (BINFO_BASE_ACCESSES (t));
6231 for (unsigned ix = 0; ix != num; ix++)
6232 WT (BINFO_BASE_BINFO (t, ix));
6233 }
6234 break;
6235
6236 case TREE_LIST:
6237 WT (t->list.purpose);
6238 WT (t->list.value);
6239 WT (t->list.common.chain);
6240 break;
6241
6242 case TREE_VEC:
6243 for (unsigned ix = TREE_VEC_LENGTH (t); ix--;)
6244 WT (TREE_VEC_ELT (t, ix));
6245 /* We stash NON_DEFAULT_TEMPLATE_ARGS_COUNT on TREE_CHAIN! */
6246 gcc_checking_assert (!t->type_common.common.chain
6247 || (TREE_CODE (t->type_common.common.chain)
6248 == INTEGER_CST));
6249 WT (t->type_common.common.chain);
6250 break;
6251
6252 /* C++-specific nodes ... */
6253 case BASELINK:
6254 WT (((lang_tree_node *)t)->baselink.binfo);
6255 WT (((lang_tree_node *)t)->baselink.functions);
6256 WT (((lang_tree_node *)t)->baselink.access_binfo);
6257 break;
6258
6259 case CONSTRAINT_INFO:
6260 WT (((lang_tree_node *)t)->constraint_info.template_reqs);
6261 WT (((lang_tree_node *)t)->constraint_info.declarator_reqs);
6262 WT (((lang_tree_node *)t)->constraint_info.associated_constr);
6263 break;
6264
6265 case DEFERRED_NOEXCEPT:
6266 WT (((lang_tree_node *)t)->deferred_noexcept.pattern);
6267 WT (((lang_tree_node *)t)->deferred_noexcept.args);
6268 break;
6269
6270 case LAMBDA_EXPR:
6271 WT (((lang_tree_node *)t)->lambda_expression.capture_list);
6272 WT (((lang_tree_node *)t)->lambda_expression.this_capture);
6273 WT (((lang_tree_node *)t)->lambda_expression.extra_scope);
6274 /* pending_proxies is a parse-time thing. */
6275 gcc_assert (!((lang_tree_node *)t)->lambda_expression.pending_proxies);
6276 if (state)
6277 state->write_location
6278 (*this, ((lang_tree_node *)t)->lambda_expression.locus);
6279 if (streaming_p ())
6280 {
6281 WU (((lang_tree_node *)t)->lambda_expression.default_capture_mode);
6282 WU (((lang_tree_node *)t)->lambda_expression.discriminator);
6283 }
6284 break;
6285
6286 case OVERLOAD:
6287 WT (((lang_tree_node *)t)->overload.function);
6288 WT (t->common.chain);
6289 break;
6290
6291 case PTRMEM_CST:
6292 WT (((lang_tree_node *)t)->ptrmem.member);
6293 break;
6294
6295 case STATIC_ASSERT:
6296 WT (((lang_tree_node *)t)->static_assertion.condition);
6297 WT (((lang_tree_node *)t)->static_assertion.message);
6298 if (state)
6299 state->write_location
6300 (*this, ((lang_tree_node *)t)->static_assertion.location);
6301 break;
6302
6303 case TEMPLATE_DECL:
6304 /* Streamed with the template_decl node itself. */
6305 gcc_checking_assert
6306 (TREE_VISITED (((lang_tree_node *)t)->template_decl.arguments));
6307 gcc_checking_assert
6308 (TREE_VISITED (((lang_tree_node *)t)->template_decl.result)
6309 || dep_hash->find_dependency (t)->is_alias_tmpl_inst ());
6310 if (DECL_UNINSTANTIATED_TEMPLATE_FRIEND_P (t))
6311 WT (DECL_CHAIN (t));
6312 break;
6313
6314 case TEMPLATE_INFO:
6315 {
6316 WT (((lang_tree_node *)t)->template_info.tmpl);
6317 WT (((lang_tree_node *)t)->template_info.args);
6318
6319 const auto *ac = (((lang_tree_node *)t)
6320 ->template_info.deferred_access_checks);
6321 unsigned len = vec_safe_length (ac);
6322 if (streaming_p ())
6323 u (len);
6324 if (len)
6325 {
6326 for (unsigned ix = 0; ix != len; ix++)
6327 {
6328 const auto &m = (*ac)[ix];
6329 WT (m.binfo);
6330 WT (m.decl);
6331 WT (m.diag_decl);
6332 if (state)
6333 state->write_location (*this, m.loc);
6334 }
6335 }
6336 }
6337 break;
6338
6339 case TEMPLATE_PARM_INDEX:
6340 if (streaming_p ())
6341 {
6342 WU (((lang_tree_node *)t)->tpi.index);
6343 WU (((lang_tree_node *)t)->tpi.level);
6344 WU (((lang_tree_node *)t)->tpi.orig_level);
6345 }
6346 WT (((lang_tree_node *)t)->tpi.decl);
6347 /* TEMPLATE_PARM_DESCENDANTS (AKA TREE_CHAIN) is an internal
6348 cache, do not stream. */
6349 break;
6350
6351 case TRAIT_EXPR:
6352 WT (((lang_tree_node *)t)->trait_expression.type1);
6353 WT (((lang_tree_node *)t)->trait_expression.type2);
6354 if (streaming_p ())
6355 WU (((lang_tree_node *)t)->trait_expression.kind);
6356 break;
6357 }
6358
6359 if (CODE_CONTAINS_STRUCT (code, TS_TYPED))
6360 {
6361 /* We want to stream the type of a expression-like nodes /after/
6362 we've streamed the operands. The type often contains (bits
6363 of the) types of the operands, and with things like decltype
6364 and noexcept in play, we really want to stream the decls
6365 defining the type before we try and stream the type on its
6366 own. Otherwise we can find ourselves trying to read in a
6367 decl, when we're already partially reading in a component of
6368 its type. And that's bad. */
6369 tree type = t->typed.type;
6370 unsigned prec = 0;
6371
6372 switch (code)
6373 {
6374 default:
6375 break;
6376
6377 case TEMPLATE_DECL:
6378 /* We fill in the template's type separately. */
6379 type = NULL_TREE;
6380 break;
6381
6382 case TYPE_DECL:
6383 if (DECL_ORIGINAL_TYPE (t) && t == TYPE_NAME (type))
6384 /* This is a typedef. We set its type separately. */
6385 type = NULL_TREE;
6386 break;
6387
6388 case ENUMERAL_TYPE:
6389 if (type && !ENUM_FIXED_UNDERLYING_TYPE_P (t))
6390 {
6391 /* Type is a restricted range integer type derived from the
6392 integer_types. Find the right one. */
6393 prec = TYPE_PRECISION (type);
6394 tree name = DECL_NAME (TYPE_NAME (type));
6395
6396 for (unsigned itk = itk_none; itk--;)
6397 if (integer_types[itk]
6398 && DECL_NAME (TYPE_NAME (integer_types[itk])) == name)
6399 {
6400 type = integer_types[itk];
6401 break;
6402 }
6403 gcc_assert (type != t->typed.type);
6404 }
6405 break;
6406 }
6407
6408 WT (type);
6409 if (prec && streaming_p ())
6410 WU (prec);
6411 }
6412
6413 #undef WT
6414 #undef WU
6415 }
6416
6417 // Streaming in a reference to a decl can cause that decl to be
6418 // TREE_USED, which is the mark_used behaviour we need most of the
6419 // time. The trees_in::unused can be incremented to inhibit this,
6420 // which is at least needed for vtables.
6421
6422 bool
6423 trees_in::core_vals (tree t)
6424 {
6425 #define RU(X) ((X) = u ())
6426 #define RUC(T,X) ((X) = T (u ()))
6427 #define RT(X) ((X) = tree_node ())
6428 #define RTU(X) ((X) = tree_node (true))
6429 tree_code code = TREE_CODE (t);
6430
6431 /* First by tree shape. */
6432 if (CODE_CONTAINS_STRUCT (code, TS_DECL_MINIMAL))
6433 {
6434 RT (t->decl_minimal.name);
6435 if (!DECL_TEMPLATE_PARM_P (t))
6436 RT (t->decl_minimal.context);
6437
6438 /* Don't zap the locus just yet, we don't record it correctly
6439 and thus lose all location information. */
6440 t->decl_minimal.locus = state->read_location (*this);
6441 }
6442
6443 if (CODE_CONTAINS_STRUCT (code, TS_TYPE_COMMON))
6444 {
6445 RT (t->type_common.name);
6446 RT (t->type_common.context);
6447
6448 RT (t->type_common.main_variant);
6449 RT (t->type_common.canonical);
6450
6451 /* type_common.next_variant is internally manipulated. */
6452 /* type_common.pointer_to, type_common.reference_to. */
6453
6454 RU (t->type_common.precision);
6455 RU (t->type_common.contains_placeholder_bits);
6456 RUC (machine_mode, t->type_common.mode);
6457 RU (t->type_common.align);
6458
6459 if (!RECORD_OR_UNION_CODE_P (code))
6460 {
6461 RT (t->type_common.size);
6462 RT (t->type_common.size_unit);
6463 }
6464 RT (t->type_common.attributes);
6465
6466 RT (t->type_common.common.chain); /* TYPE_STUB_DECL. */
6467 }
6468
6469 if (CODE_CONTAINS_STRUCT (code, TS_DECL_COMMON))
6470 {
6471 RUC (machine_mode, t->decl_common.mode);
6472 RU (t->decl_common.off_align);
6473 RU (t->decl_common.align);
6474
6475 if (code != TEMPLATE_DECL)
6476 {
6477 RT (t->decl_common.size);
6478 RT (t->decl_common.size_unit);
6479 }
6480
6481 RT (t->decl_common.attributes);
6482 RT (t->decl_common.abstract_origin);
6483 }
6484
6485 if (CODE_CONTAINS_STRUCT (code, TS_DECL_WITH_VIS))
6486 {
6487 RT (t->decl_with_vis.assembler_name);
6488 RUC (symbol_visibility, t->decl_with_vis.visibility);
6489 }
6490
6491 if (CODE_CONTAINS_STRUCT (code, TS_TYPE_NON_COMMON))
6492 {
6493 /* Records and unions hold FIELDS, VFIELD & BINFO on these
6494 things. */
6495 if (!RECORD_OR_UNION_CODE_P (code) && code != ENUMERAL_TYPE)
6496 {
6497 /* This is not clobbering TYPE_CACHED_VALUES, because this
6498 is a type that doesn't have any. */
6499 gcc_checking_assert (!TYPE_CACHED_VALUES_P (t));
6500 RT (t->type_non_common.values);
6501 RT (t->type_non_common.maxval);
6502 RT (t->type_non_common.minval);
6503 }
6504
6505 RT (t->type_non_common.lang_1);
6506 }
6507
6508 if (CODE_CONTAINS_STRUCT (code, TS_EXP))
6509 {
6510 t->exp.locus = state->read_location (*this);
6511
6512 bool vl = TREE_CODE_CLASS (code) == tcc_vl_exp;
6513 for (unsigned limit = (vl ? VL_EXP_OPERAND_LENGTH (t)
6514 : TREE_OPERAND_LENGTH (t)),
6515 ix = unsigned (vl); ix != limit; ix++)
6516 RTU (TREE_OPERAND (t, ix));
6517 }
6518
6519 /* Then by CODE. Special cases and/or 1:1 tree shape
6520 correspondance. */
6521 switch (code)
6522 {
6523 default:
6524 break;
6525
6526 case ARGUMENT_PACK_SELECT:
6527 case DEFERRED_PARSE:
6528 case IDENTIFIER_NODE:
6529 case BINDING_VECTOR:
6530 case SSA_NAME:
6531 case TRANSLATION_UNIT_DECL:
6532 case USERDEF_LITERAL:
6533 return false; /* Should never meet. */
6534
6535 /* Constants. */
6536 case COMPLEX_CST:
6537 RT (TREE_REALPART (t));
6538 RT (TREE_IMAGPART (t));
6539 break;
6540
6541 case FIXED_CST:
6542 /* Not suported in C++. */
6543 return false;
6544
6545 case INTEGER_CST:
6546 {
6547 unsigned num = TREE_INT_CST_EXT_NUNITS (t);
6548 for (unsigned ix = 0; ix != num; ix++)
6549 TREE_INT_CST_ELT (t, ix) = wu ();
6550 }
6551 break;
6552
6553 case POLY_INT_CST:
6554 /* Not suported in C++. */
6555 return false;
6556
6557 case REAL_CST:
6558 if (const void *bytes = buf (sizeof (real_value)))
6559 TREE_REAL_CST_PTR (t)
6560 = reinterpret_cast<real_value *> (memcpy (ggc_alloc<real_value> (),
6561 bytes, sizeof (real_value)));
6562 break;
6563
6564 case STRING_CST:
6565 /* Streamed during start. */
6566 break;
6567
6568 case VECTOR_CST:
6569 for (unsigned ix = vector_cst_encoded_nelts (t); ix--;)
6570 RT (VECTOR_CST_ENCODED_ELT (t, ix));
6571 break;
6572
6573 /* Decls. */
6574 case VAR_DECL:
6575 if (DECL_CONTEXT (t)
6576 && TREE_CODE (DECL_CONTEXT (t)) != FUNCTION_DECL)
6577 break;
6578 /* FALLTHROUGH */
6579
6580 case RESULT_DECL:
6581 case PARM_DECL:
6582 if (DECL_HAS_VALUE_EXPR_P (t))
6583 {
6584 /* The DECL_VALUE hash table is a cache, thus if we're
6585 reading a duplicate (which we end up discarding), the
6586 value expr will also be cleaned up at the next gc. */
6587 tree val = tree_node ();
6588 SET_DECL_VALUE_EXPR (t, val);
6589 }
6590 /* FALLTHROUGH */
6591
6592 case CONST_DECL:
6593 case IMPORTED_DECL:
6594 RT (t->decl_common.initial);
6595 break;
6596
6597 case FIELD_DECL:
6598 RT (t->field_decl.offset);
6599 RT (t->field_decl.bit_field_type);
6600 RT (t->field_decl.qualifier);
6601 RT (t->field_decl.bit_offset);
6602 RT (t->field_decl.fcontext);
6603 RT (t->decl_common.initial);
6604 break;
6605
6606 case LABEL_DECL:
6607 RU (t->label_decl.label_decl_uid);
6608 RU (t->label_decl.eh_landing_pad_nr);
6609 break;
6610
6611 case FUNCTION_DECL:
6612 {
6613 unsigned bltin = u ();
6614 t->function_decl.built_in_class = built_in_class (bltin);
6615 if (bltin != NOT_BUILT_IN)
6616 {
6617 bltin = u ();
6618 DECL_UNCHECKED_FUNCTION_CODE (t) = built_in_function (bltin);
6619 }
6620
6621 RT (t->function_decl.personality);
6622 RT (t->function_decl.function_specific_target);
6623 RT (t->function_decl.function_specific_optimization);
6624 RT (t->function_decl.vindex);
6625 }
6626 break;
6627
6628 case USING_DECL:
6629 /* USING_DECL_DECLS */
6630 RT (t->decl_common.initial);
6631 /* FALLTHROUGH */
6632
6633 case TYPE_DECL:
6634 /* USING_DECL: USING_DECL_SCOPE */
6635 /* TYPE_DECL: DECL_ORIGINAL_TYPE */
6636 RT (t->decl_non_common.result);
6637 break;
6638
6639 /* Miscellaneous common nodes. */
6640 case BLOCK:
6641 t->block.locus = state->read_location (*this);
6642 t->block.end_locus = state->read_location (*this);
6643 t->block.vars = chained_decls ();
6644 /* nonlocalized_vars is middle-end. */
6645 RT (t->block.subblocks);
6646 RT (t->block.supercontext);
6647 RT (t->block.abstract_origin);
6648 /* fragment_origin, fragment_chain are middle-end. */
6649 RT (t->block.chain);
6650 /* nonlocalized_vars, block_num, die are middle endy/debug
6651 things. */
6652 break;
6653
6654 case CALL_EXPR:
6655 RUC (internal_fn, t->base.u.ifn);
6656 break;
6657
6658 case CONSTRUCTOR:
6659 if (unsigned len = u ())
6660 {
6661 vec_alloc (t->constructor.elts, len);
6662 for (unsigned ix = 0; ix != len; ix++)
6663 {
6664 constructor_elt elt;
6665
6666 RT (elt.index);
6667 RTU (elt.value);
6668 t->constructor.elts->quick_push (elt);
6669 }
6670 }
6671 break;
6672
6673 case OMP_CLAUSE:
6674 {
6675 RU (t->omp_clause.subcode.map_kind);
6676 t->omp_clause.locus = state->read_location (*this);
6677
6678 unsigned len = omp_clause_num_ops[OMP_CLAUSE_CODE (t)];
6679 for (unsigned ix = 0; ix != len; ix++)
6680 RT (t->omp_clause.ops[ix]);
6681 }
6682 break;
6683
6684 case STATEMENT_LIST:
6685 {
6686 tree_stmt_iterator iter = tsi_start (t);
6687 for (tree stmt; RT (stmt);)
6688 tsi_link_after (&iter, stmt, TSI_CONTINUE_LINKING);
6689 }
6690 break;
6691
6692 case OPTIMIZATION_NODE:
6693 case TARGET_OPTION_NODE:
6694 /* Not yet implemented, see trees_out::core_vals. */
6695 gcc_unreachable ();
6696 break;
6697
6698 case TREE_BINFO:
6699 RT (t->binfo.common.chain);
6700 RT (t->binfo.offset);
6701 RT (t->binfo.inheritance);
6702 RT (t->binfo.vptr_field);
6703
6704 /* Do not mark the vtables as USED in the address expressions
6705 here. */
6706 unused++;
6707 RT (t->binfo.vtable);
6708 RT (t->binfo.virtuals);
6709 RT (t->binfo.vtt_subvtt);
6710 RT (t->binfo.vtt_vptr);
6711 unused--;
6712
6713 BINFO_BASE_ACCESSES (t) = tree_vec ();
6714 if (!get_overrun ())
6715 {
6716 unsigned num = vec_safe_length (BINFO_BASE_ACCESSES (t));
6717 for (unsigned ix = 0; ix != num; ix++)
6718 BINFO_BASE_APPEND (t, tree_node ());
6719 }
6720 break;
6721
6722 case TREE_LIST:
6723 RT (t->list.purpose);
6724 RT (t->list.value);
6725 RT (t->list.common.chain);
6726 break;
6727
6728 case TREE_VEC:
6729 for (unsigned ix = TREE_VEC_LENGTH (t); ix--;)
6730 RT (TREE_VEC_ELT (t, ix));
6731 RT (t->type_common.common.chain);
6732 break;
6733
6734 /* C++-specific nodes ... */
6735 case BASELINK:
6736 RT (((lang_tree_node *)t)->baselink.binfo);
6737 RTU (((lang_tree_node *)t)->baselink.functions);
6738 RT (((lang_tree_node *)t)->baselink.access_binfo);
6739 break;
6740
6741 case CONSTRAINT_INFO:
6742 RT (((lang_tree_node *)t)->constraint_info.template_reqs);
6743 RT (((lang_tree_node *)t)->constraint_info.declarator_reqs);
6744 RT (((lang_tree_node *)t)->constraint_info.associated_constr);
6745 break;
6746
6747 case DEFERRED_NOEXCEPT:
6748 RT (((lang_tree_node *)t)->deferred_noexcept.pattern);
6749 RT (((lang_tree_node *)t)->deferred_noexcept.args);
6750 break;
6751
6752 case LAMBDA_EXPR:
6753 RT (((lang_tree_node *)t)->lambda_expression.capture_list);
6754 RT (((lang_tree_node *)t)->lambda_expression.this_capture);
6755 RT (((lang_tree_node *)t)->lambda_expression.extra_scope);
6756 /* lambda_expression.pending_proxies is NULL */
6757 ((lang_tree_node *)t)->lambda_expression.locus
6758 = state->read_location (*this);
6759 RUC (cp_lambda_default_capture_mode_type,
6760 ((lang_tree_node *)t)->lambda_expression.default_capture_mode);
6761 RU (((lang_tree_node *)t)->lambda_expression.discriminator);
6762 break;
6763
6764 case OVERLOAD:
6765 RT (((lang_tree_node *)t)->overload.function);
6766 RT (t->common.chain);
6767 break;
6768
6769 case PTRMEM_CST:
6770 RT (((lang_tree_node *)t)->ptrmem.member);
6771 break;
6772
6773 case STATIC_ASSERT:
6774 RT (((lang_tree_node *)t)->static_assertion.condition);
6775 RT (((lang_tree_node *)t)->static_assertion.message);
6776 ((lang_tree_node *)t)->static_assertion.location
6777 = state->read_location (*this);
6778 break;
6779
6780 case TEMPLATE_DECL:
6781 /* Streamed when reading the raw template decl itself. */
6782 gcc_assert (((lang_tree_node *)t)->template_decl.arguments);
6783 gcc_assert (((lang_tree_node *)t)->template_decl.result);
6784 if (DECL_UNINSTANTIATED_TEMPLATE_FRIEND_P (t))
6785 RT (DECL_CHAIN (t));
6786 break;
6787
6788 case TEMPLATE_INFO:
6789 RT (((lang_tree_node *)t)->template_info.tmpl);
6790 RT (((lang_tree_node *)t)->template_info.args);
6791 if (unsigned len = u ())
6792 {
6793 auto &ac = (((lang_tree_node *)t)
6794 ->template_info.deferred_access_checks);
6795 vec_alloc (ac, len);
6796 for (unsigned ix = 0; ix != len; ix++)
6797 {
6798 deferred_access_check m;
6799
6800 RT (m.binfo);
6801 RT (m.decl);
6802 RT (m.diag_decl);
6803 m.loc = state->read_location (*this);
6804 ac->quick_push (m);
6805 }
6806 }
6807 break;
6808
6809 case TEMPLATE_PARM_INDEX:
6810 RU (((lang_tree_node *)t)->tpi.index);
6811 RU (((lang_tree_node *)t)->tpi.level);
6812 RU (((lang_tree_node *)t)->tpi.orig_level);
6813 RT (((lang_tree_node *)t)->tpi.decl);
6814 break;
6815
6816 case TRAIT_EXPR:
6817 RT (((lang_tree_node *)t)->trait_expression.type1);
6818 RT (((lang_tree_node *)t)->trait_expression.type2);
6819 RUC (cp_trait_kind, ((lang_tree_node *)t)->trait_expression.kind);
6820 break;
6821 }
6822
6823 if (CODE_CONTAINS_STRUCT (code, TS_TYPED))
6824 {
6825 tree type = tree_node ();
6826
6827 if (type && code == ENUMERAL_TYPE && !ENUM_FIXED_UNDERLYING_TYPE_P (t))
6828 {
6829 unsigned precision = u ();
6830
6831 type = build_distinct_type_copy (type);
6832 TYPE_PRECISION (type) = precision;
6833 set_min_and_max_values_for_integral_type (type, precision,
6834 TYPE_SIGN (type));
6835 }
6836
6837 if (code != TEMPLATE_DECL)
6838 t->typed.type = type;
6839 }
6840
6841 #undef RT
6842 #undef RM
6843 #undef RU
6844 return !get_overrun ();
6845 }
6846
6847 void
6848 trees_out::lang_decl_vals (tree t)
6849 {
6850 const struct lang_decl *lang = DECL_LANG_SPECIFIC (t);
6851 #define WU(X) (u (X))
6852 #define WT(X) (tree_node (X))
6853 /* Module index already written. */
6854 switch (lang->u.base.selector)
6855 {
6856 default:
6857 gcc_unreachable ();
6858
6859 case lds_fn: /* lang_decl_fn. */
6860 if (streaming_p ())
6861 {
6862 if (DECL_NAME (t) && IDENTIFIER_OVL_OP_P (DECL_NAME (t)))
6863 WU (lang->u.fn.ovl_op_code);
6864 }
6865
6866 if (DECL_CLASS_SCOPE_P (t))
6867 WT (lang->u.fn.context);
6868
6869 if (lang->u.fn.thunk_p)
6870 {
6871 /* The thunked-to function. */
6872 WT (lang->u.fn.befriending_classes);
6873 if (streaming_p ())
6874 wi (lang->u.fn.u5.fixed_offset);
6875 }
6876 else
6877 WT (lang->u.fn.u5.cloned_function);
6878
6879 if (FNDECL_USED_AUTO (t))
6880 WT (lang->u.fn.u.saved_auto_return_type);
6881
6882 goto lds_min;
6883
6884 case lds_decomp: /* lang_decl_decomp. */
6885 WT (lang->u.decomp.base);
6886 goto lds_min;
6887
6888 case lds_min: /* lang_decl_min. */
6889 lds_min:
6890 WT (lang->u.min.template_info);
6891 {
6892 tree access = lang->u.min.access;
6893
6894 /* DECL_ACCESS needs to be maintained by the definition of the
6895 (derived) class that changes the access. The other users
6896 of DECL_ACCESS need to write it here. */
6897 if (!DECL_THUNK_P (t)
6898 && (DECL_CONTEXT (t) && TYPE_P (DECL_CONTEXT (t))))
6899 access = NULL_TREE;
6900
6901 WT (access);
6902 }
6903 break;
6904
6905 case lds_ns: /* lang_decl_ns. */
6906 break;
6907
6908 case lds_parm: /* lang_decl_parm. */
6909 if (streaming_p ())
6910 {
6911 WU (lang->u.parm.level);
6912 WU (lang->u.parm.index);
6913 }
6914 break;
6915 }
6916 #undef WU
6917 #undef WT
6918 }
6919
6920 bool
6921 trees_in::lang_decl_vals (tree t)
6922 {
6923 struct lang_decl *lang = DECL_LANG_SPECIFIC (t);
6924 #define RU(X) ((X) = u ())
6925 #define RT(X) ((X) = tree_node ())
6926
6927 /* Module index already read. */
6928 switch (lang->u.base.selector)
6929 {
6930 default:
6931 gcc_unreachable ();
6932
6933 case lds_fn: /* lang_decl_fn. */
6934 if (DECL_NAME (t) && IDENTIFIER_OVL_OP_P (DECL_NAME (t)))
6935 {
6936 unsigned code = u ();
6937
6938 /* Check consistency. */
6939 if (code >= OVL_OP_MAX
6940 || (ovl_op_info[IDENTIFIER_ASSIGN_OP_P (DECL_NAME (t))][code]
6941 .ovl_op_code) == OVL_OP_ERROR_MARK)
6942 set_overrun ();
6943 else
6944 lang->u.fn.ovl_op_code = code;
6945 }
6946
6947 if (DECL_CLASS_SCOPE_P (t))
6948 RT (lang->u.fn.context);
6949
6950 if (lang->u.fn.thunk_p)
6951 {
6952 RT (lang->u.fn.befriending_classes);
6953 lang->u.fn.u5.fixed_offset = wi ();
6954 }
6955 else
6956 RT (lang->u.fn.u5.cloned_function);
6957
6958 if (FNDECL_USED_AUTO (t))
6959 RT (lang->u.fn.u.saved_auto_return_type);
6960 goto lds_min;
6961
6962 case lds_decomp: /* lang_decl_decomp. */
6963 RT (lang->u.decomp.base);
6964 goto lds_min;
6965
6966 case lds_min: /* lang_decl_min. */
6967 lds_min:
6968 RT (lang->u.min.template_info);
6969 RT (lang->u.min.access);
6970 break;
6971
6972 case lds_ns: /* lang_decl_ns. */
6973 break;
6974
6975 case lds_parm: /* lang_decl_parm. */
6976 RU (lang->u.parm.level);
6977 RU (lang->u.parm.index);
6978 break;
6979 }
6980 #undef RU
6981 #undef RT
6982 return !get_overrun ();
6983 }
6984
6985 /* Most of the value contents of lang_type is streamed in
6986 define_class. */
6987
6988 void
6989 trees_out::lang_type_vals (tree t)
6990 {
6991 const struct lang_type *lang = TYPE_LANG_SPECIFIC (t);
6992 #define WU(X) (u (X))
6993 #define WT(X) (tree_node (X))
6994 if (streaming_p ())
6995 WU (lang->align);
6996 #undef WU
6997 #undef WT
6998 }
6999
7000 bool
7001 trees_in::lang_type_vals (tree t)
7002 {
7003 struct lang_type *lang = TYPE_LANG_SPECIFIC (t);
7004 #define RU(X) ((X) = u ())
7005 #define RT(X) ((X) = tree_node ())
7006 RU (lang->align);
7007 #undef RU
7008 #undef RT
7009 return !get_overrun ();
7010 }
7011
7012 /* Write out the bools of T, including information about any
7013 LANG_SPECIFIC information. Including allocation of any lang
7014 specific object. */
7015
7016 void
7017 trees_out::tree_node_bools (tree t)
7018 {
7019 gcc_checking_assert (streaming_p ());
7020
7021 /* We should never stream a namespace. */
7022 gcc_checking_assert (TREE_CODE (t) != NAMESPACE_DECL
7023 || DECL_NAMESPACE_ALIAS (t));
7024
7025 core_bools (t);
7026
7027 switch (TREE_CODE_CLASS (TREE_CODE (t)))
7028 {
7029 case tcc_declaration:
7030 {
7031 bool specific = DECL_LANG_SPECIFIC (t) != NULL;
7032 b (specific);
7033 if (specific && VAR_P (t))
7034 b (DECL_DECOMPOSITION_P (t));
7035 if (specific)
7036 lang_decl_bools (t);
7037 }
7038 break;
7039
7040 case tcc_type:
7041 {
7042 bool specific = (TYPE_MAIN_VARIANT (t) == t
7043 && TYPE_LANG_SPECIFIC (t) != NULL);
7044 gcc_assert (TYPE_LANG_SPECIFIC (t)
7045 == TYPE_LANG_SPECIFIC (TYPE_MAIN_VARIANT (t)));
7046
7047 b (specific);
7048 if (specific)
7049 lang_type_bools (t);
7050 }
7051 break;
7052
7053 default:
7054 break;
7055 }
7056
7057 bflush ();
7058 }
7059
7060 bool
7061 trees_in::tree_node_bools (tree t)
7062 {
7063 bool ok = core_bools (t);
7064
7065 if (ok)
7066 switch (TREE_CODE_CLASS (TREE_CODE (t)))
7067 {
7068 case tcc_declaration:
7069 if (b ())
7070 {
7071 bool decomp = VAR_P (t) && b ();
7072
7073 ok = maybe_add_lang_decl_raw (t, decomp);
7074 if (ok)
7075 ok = lang_decl_bools (t);
7076 }
7077 break;
7078
7079 case tcc_type:
7080 if (b ())
7081 {
7082 ok = maybe_add_lang_type_raw (t);
7083 if (ok)
7084 ok = lang_type_bools (t);
7085 }
7086 break;
7087
7088 default:
7089 break;
7090 }
7091
7092 bflush ();
7093 if (!ok || get_overrun ())
7094 return false;
7095
7096 return true;
7097 }
7098
7099
7100 /* Write out the lang-specifc vals of node T. */
7101
7102 void
7103 trees_out::lang_vals (tree t)
7104 {
7105 switch (TREE_CODE_CLASS (TREE_CODE (t)))
7106 {
7107 case tcc_declaration:
7108 if (DECL_LANG_SPECIFIC (t))
7109 lang_decl_vals (t);
7110 break;
7111
7112 case tcc_type:
7113 if (TYPE_MAIN_VARIANT (t) == t && TYPE_LANG_SPECIFIC (t))
7114 lang_type_vals (t);
7115 break;
7116
7117 default:
7118 break;
7119 }
7120 }
7121
7122 bool
7123 trees_in::lang_vals (tree t)
7124 {
7125 bool ok = true;
7126
7127 switch (TREE_CODE_CLASS (TREE_CODE (t)))
7128 {
7129 case tcc_declaration:
7130 if (DECL_LANG_SPECIFIC (t))
7131 ok = lang_decl_vals (t);
7132 break;
7133
7134 case tcc_type:
7135 if (TYPE_LANG_SPECIFIC (t))
7136 ok = lang_type_vals (t);
7137 else
7138 TYPE_LANG_SPECIFIC (t) = TYPE_LANG_SPECIFIC (TYPE_MAIN_VARIANT (t));
7139 break;
7140
7141 default:
7142 break;
7143 }
7144
7145 return ok;
7146 }
7147
7148 /* Write out the value fields of node T. */
7149
7150 void
7151 trees_out::tree_node_vals (tree t)
7152 {
7153 core_vals (t);
7154 lang_vals (t);
7155 }
7156
7157 bool
7158 trees_in::tree_node_vals (tree t)
7159 {
7160 bool ok = core_vals (t);
7161 if (ok)
7162 ok = lang_vals (t);
7163
7164 return ok;
7165 }
7166
7167
7168 /* If T is a back reference, fixed reference or NULL, write out it's
7169 code and return WK_none. Otherwise return WK_value if we must write
7170 by value, or WK_normal otherwise. */
7171
7172 walk_kind
7173 trees_out::ref_node (tree t)
7174 {
7175 if (!t)
7176 {
7177 if (streaming_p ())
7178 {
7179 /* NULL_TREE -> tt_null. */
7180 null_count++;
7181 i (tt_null);
7182 }
7183 return WK_none;
7184 }
7185
7186 if (!TREE_VISITED (t))
7187 return WK_normal;
7188
7189 /* An already-visited tree. It must be in the map. */
7190 int val = get_tag (t);
7191
7192 if (val == tag_value)
7193 /* An entry we should walk into. */
7194 return WK_value;
7195
7196 const char *kind;
7197
7198 if (val <= tag_backref)
7199 {
7200 /* Back reference -> -ve number */
7201 if (streaming_p ())
7202 i (val);
7203 kind = "backref";
7204 }
7205 else if (val >= tag_fixed)
7206 {
7207 /* Fixed reference -> tt_fixed */
7208 val -= tag_fixed;
7209 if (streaming_p ())
7210 i (tt_fixed), u (val);
7211 kind = "fixed";
7212 }
7213
7214 if (streaming_p ())
7215 {
7216 back_ref_count++;
7217 dump (dumper::TREE)
7218 && dump ("Wrote %s:%d %C:%N%S", kind, val, TREE_CODE (t), t, t);
7219 }
7220 return WK_none;
7221 }
7222
7223 tree
7224 trees_in::back_ref (int tag)
7225 {
7226 tree res = NULL_TREE;
7227
7228 if (tag < 0 && unsigned (~tag) < back_refs.length ())
7229 res = back_refs[~tag];
7230
7231 if (!res
7232 /* Checking TREE_CODE is a dereference, so we know this is not a
7233 wild pointer. Checking the code provides evidence we've not
7234 corrupted something. */
7235 || TREE_CODE (res) >= MAX_TREE_CODES)
7236 set_overrun ();
7237 else
7238 dump (dumper::TREE) && dump ("Read backref:%d found %C:%N%S", tag,
7239 TREE_CODE (res), res, res);
7240 return res;
7241 }
7242
7243 unsigned
7244 trees_out::add_indirect_tpl_parms (tree parms)
7245 {
7246 unsigned len = 0;
7247 for (; parms; parms = TREE_CHAIN (parms), len++)
7248 {
7249 if (TREE_VISITED (parms))
7250 break;
7251
7252 int tag = insert (parms);
7253 if (streaming_p ())
7254 dump (dumper::TREE)
7255 && dump ("Indirect:%d template's parameter %u %C:%N",
7256 tag, len, TREE_CODE (parms), parms);
7257 }
7258
7259 if (streaming_p ())
7260 u (len);
7261
7262 return len;
7263 }
7264
7265 unsigned
7266 trees_in::add_indirect_tpl_parms (tree parms)
7267 {
7268 unsigned len = u ();
7269 for (unsigned ix = 0; ix != len; parms = TREE_CHAIN (parms), ix++)
7270 {
7271 int tag = insert (parms);
7272 dump (dumper::TREE)
7273 && dump ("Indirect:%d template's parameter %u %C:%N",
7274 tag, ix, TREE_CODE (parms), parms);
7275 }
7276
7277 return len;
7278 }
7279
7280 /* We've just found DECL by name. Insert nodes that come with it, but
7281 cannot be found by name, so we'll not accidentally walk into them. */
7282
7283 void
7284 trees_out::add_indirects (tree decl)
7285 {
7286 unsigned count = 0;
7287
7288 // FIXME:OPTIMIZATION We'll eventually want default fn parms of
7289 // templates and perhaps default template parms too. The former can
7290 // be referenced from instantiations (as they are lazily
7291 // instantiated). Also (deferred?) exception specifications of
7292 // templates. See the note about PARM_DECLs in trees_out::decl_node.
7293 tree inner = decl;
7294 if (TREE_CODE (decl) == TEMPLATE_DECL)
7295 {
7296 count += add_indirect_tpl_parms (DECL_TEMPLATE_PARMS (decl));
7297
7298 inner = DECL_TEMPLATE_RESULT (decl);
7299 int tag = insert (inner);
7300 if (streaming_p ())
7301 dump (dumper::TREE)
7302 && dump ("Indirect:%d template's result %C:%N",
7303 tag, TREE_CODE (inner), inner);
7304 count++;
7305 }
7306
7307 if (TREE_CODE (inner) == TYPE_DECL)
7308 {
7309 /* Make sure the type is in the map too. Otherwise we get
7310 different RECORD_TYPEs for the same type, and things go
7311 south. */
7312 tree type = TREE_TYPE (inner);
7313 gcc_checking_assert (DECL_ORIGINAL_TYPE (inner)
7314 || TYPE_NAME (type) == inner);
7315 int tag = insert (type);
7316 if (streaming_p ())
7317 dump (dumper::TREE) && dump ("Indirect:%d decl's type %C:%N", tag,
7318 TREE_CODE (type), type);
7319 count++;
7320 }
7321
7322 if (streaming_p ())
7323 {
7324 u (count);
7325 dump (dumper::TREE) && dump ("Inserted %u indirects", count);
7326 }
7327 }
7328
7329 bool
7330 trees_in::add_indirects (tree decl)
7331 {
7332 unsigned count = 0;
7333
7334 tree inner = decl;
7335 if (TREE_CODE (inner) == TEMPLATE_DECL)
7336 {
7337 count += add_indirect_tpl_parms (DECL_TEMPLATE_PARMS (decl));
7338
7339 inner = DECL_TEMPLATE_RESULT (decl);
7340 int tag = insert (inner);
7341 dump (dumper::TREE)
7342 && dump ("Indirect:%d templates's result %C:%N", tag,
7343 TREE_CODE (inner), inner);
7344 count++;
7345 }
7346
7347 if (TREE_CODE (inner) == TYPE_DECL)
7348 {
7349 tree type = TREE_TYPE (inner);
7350 gcc_checking_assert (DECL_ORIGINAL_TYPE (inner)
7351 || TYPE_NAME (type) == inner);
7352 int tag = insert (type);
7353 dump (dumper::TREE)
7354 && dump ("Indirect:%d decl's type %C:%N", tag, TREE_CODE (type), type);
7355 count++;
7356 }
7357
7358 dump (dumper::TREE) && dump ("Inserted %u indirects", count);
7359 return count == u ();
7360 }
7361
7362 /* Stream a template parameter. There are 4.5 kinds of parameter:
7363 a) Template - TEMPLATE_DECL->TYPE_DECL->TEMPLATE_TEMPLATE_PARM
7364 TEMPLATE_TYPE_PARM_INDEX TPI
7365 b) Type - TYPE_DECL->TEMPLATE_TYPE_PARM TEMPLATE_TYPE_PARM_INDEX TPI
7366 c.1) NonTYPE - PARM_DECL DECL_INITIAL TPI We meet this first
7367 c.2) NonTYPE - CONST_DECL DECL_INITIAL Same TPI
7368 d) BoundTemplate - TYPE_DECL->BOUND_TEMPLATE_TEMPLATE_PARM
7369 TEMPLATE_TYPE_PARM_INDEX->TPI
7370 TEMPLATE_TEMPLATE_PARM_INFO->TEMPLATE_INFO
7371
7372 All of these point to a TEMPLATE_PARM_INDEX, and #B also has a TEMPLATE_INFO
7373 */
7374
7375 void
7376 trees_out::tpl_parm_value (tree parm)
7377 {
7378 gcc_checking_assert (DECL_P (parm) && DECL_TEMPLATE_PARM_P (parm));
7379
7380 int parm_tag = insert (parm);
7381 if (streaming_p ())
7382 {
7383 i (tt_tpl_parm);
7384 dump (dumper::TREE) && dump ("Writing template parm:%d %C:%N",
7385 parm_tag, TREE_CODE (parm), parm);
7386 start (parm);
7387 tree_node_bools (parm);
7388 }
7389
7390 tree inner = parm;
7391 if (TREE_CODE (inner) == TEMPLATE_DECL)
7392 {
7393 inner = DECL_TEMPLATE_RESULT (inner);
7394 int inner_tag = insert (inner);
7395 if (streaming_p ())
7396 {
7397 dump (dumper::TREE) && dump ("Writing inner template parm:%d %C:%N",
7398 inner_tag, TREE_CODE (inner), inner);
7399 start (inner);
7400 tree_node_bools (inner);
7401 }
7402 }
7403
7404 tree type = NULL_TREE;
7405 if (TREE_CODE (inner) == TYPE_DECL)
7406 {
7407 type = TREE_TYPE (inner);
7408 int type_tag = insert (type);
7409 if (streaming_p ())
7410 {
7411 dump (dumper::TREE) && dump ("Writing template parm type:%d %C:%N",
7412 type_tag, TREE_CODE (type), type);
7413 start (type);
7414 tree_node_bools (type);
7415 }
7416 }
7417
7418 if (inner != parm)
7419 {
7420 /* This is a template-template parameter. */
7421 unsigned tpl_levels = 0;
7422 tpl_header (parm, &tpl_levels);
7423 tpl_parms_fini (parm, tpl_levels);
7424 }
7425
7426 tree_node_vals (parm);
7427 if (inner != parm)
7428 tree_node_vals (inner);
7429 if (type)
7430 {
7431 tree_node_vals (type);
7432 if (DECL_NAME (inner) == auto_identifier
7433 || DECL_NAME (inner) == decltype_auto_identifier)
7434 {
7435 /* Placeholder auto. */
7436 tree_node (DECL_INITIAL (inner));
7437 tree_node (DECL_SIZE_UNIT (inner));
7438 }
7439 }
7440
7441 if (streaming_p ())
7442 dump (dumper::TREE) && dump ("Wrote template parm:%d %C:%N",
7443 parm_tag, TREE_CODE (parm), parm);
7444 }
7445
7446 tree
7447 trees_in::tpl_parm_value ()
7448 {
7449 tree parm = start ();
7450 if (!parm || !tree_node_bools (parm))
7451 return NULL_TREE;
7452
7453 int parm_tag = insert (parm);
7454 dump (dumper::TREE) && dump ("Reading template parm:%d %C:%N",
7455 parm_tag, TREE_CODE (parm), parm);
7456
7457 tree inner = parm;
7458 if (TREE_CODE (inner) == TEMPLATE_DECL)
7459 {
7460 inner = start ();
7461 if (!inner || !tree_node_bools (inner))
7462 return NULL_TREE;
7463 int inner_tag = insert (inner);
7464 dump (dumper::TREE) && dump ("Reading inner template parm:%d %C:%N",
7465 inner_tag, TREE_CODE (inner), inner);
7466 DECL_TEMPLATE_RESULT (parm) = inner;
7467 }
7468
7469 tree type = NULL_TREE;
7470 if (TREE_CODE (inner) == TYPE_DECL)
7471 {
7472 type = start ();
7473 if (!type || !tree_node_bools (type))
7474 return NULL_TREE;
7475 int type_tag = insert (type);
7476 dump (dumper::TREE) && dump ("Reading template parm type:%d %C:%N",
7477 type_tag, TREE_CODE (type), type);
7478
7479 TREE_TYPE (inner) = TREE_TYPE (parm) = type;
7480 TYPE_NAME (type) = parm;
7481 }
7482
7483 if (inner != parm)
7484 {
7485 /* A template template parameter. */
7486 unsigned tpl_levels = 0;
7487 tpl_header (parm, &tpl_levels);
7488 tpl_parms_fini (parm, tpl_levels);
7489 }
7490
7491 tree_node_vals (parm);
7492 if (inner != parm)
7493 tree_node_vals (inner);
7494 if (type)
7495 {
7496 tree_node_vals (type);
7497 if (DECL_NAME (inner) == auto_identifier
7498 || DECL_NAME (inner) == decltype_auto_identifier)
7499 {
7500 /* Placeholder auto. */
7501 DECL_INITIAL (inner) = tree_node ();
7502 DECL_SIZE_UNIT (inner) = tree_node ();
7503 }
7504 if (TYPE_CANONICAL (type))
7505 {
7506 gcc_checking_assert (TYPE_CANONICAL (type) == type);
7507 TYPE_CANONICAL (type) = canonical_type_parameter (type);
7508 }
7509 }
7510
7511 dump (dumper::TREE) && dump ("Read template parm:%d %C:%N",
7512 parm_tag, TREE_CODE (parm), parm);
7513
7514 return parm;
7515 }
7516
7517 void
7518 trees_out::install_entity (tree decl, depset *dep)
7519 {
7520 gcc_checking_assert (streaming_p ());
7521
7522 /* Write the entity index, so we can insert it as soon as we
7523 know this is new. */
7524 u (dep ? dep->cluster + 1 : 0);
7525 if (CHECKING_P && dep)
7526 {
7527 /* Add it to the entity map, such that we can tell it is
7528 part of us. */
7529 bool existed;
7530 unsigned *slot = &entity_map->get_or_insert
7531 (DECL_UID (decl), &existed);
7532 if (existed)
7533 /* If it existed, it should match. */
7534 gcc_checking_assert (decl == (*entity_ary)[*slot]);
7535 *slot = ~dep->cluster;
7536 }
7537 }
7538
7539 bool
7540 trees_in::install_entity (tree decl)
7541 {
7542 unsigned entity_index = u ();
7543 if (!entity_index)
7544 return false;
7545
7546 if (entity_index > state->entity_num)
7547 {
7548 set_overrun ();
7549 return false;
7550 }
7551
7552 /* Insert the real decl into the entity ary. */
7553 unsigned ident = state->entity_lwm + entity_index - 1;
7554 binding_slot &elt = (*entity_ary)[ident];
7555
7556 /* See module_state::read_pendings for how this got set. */
7557 int pending = elt.get_lazy () & 3;
7558
7559 elt = decl;
7560
7561 /* And into the entity map, if it's not already there. */
7562 if (!DECL_LANG_SPECIFIC (decl)
7563 || !DECL_MODULE_ENTITY_P (decl))
7564 {
7565 retrofit_lang_decl (decl);
7566 DECL_MODULE_ENTITY_P (decl) = true;
7567
7568 /* Insert into the entity hash (it cannot already be there). */
7569 bool existed;
7570 unsigned &slot = entity_map->get_or_insert (DECL_UID (decl), &existed);
7571 gcc_checking_assert (!existed);
7572 slot = ident;
7573 }
7574 else if (pending != 0)
7575 {
7576 unsigned key_ident = import_entity_index (decl);
7577 if (pending & 1)
7578 if (!pending_table->add (key_ident, ~ident))
7579 pending &= ~1;
7580
7581 if (pending & 2)
7582 if (!pending_table->add (~key_ident, ~ident))
7583 pending &= ~2;
7584 }
7585
7586 if (pending & 1)
7587 DECL_MODULE_PENDING_SPECIALIZATIONS_P (decl) = true;
7588
7589 if (pending & 2)
7590 {
7591 DECL_MODULE_PENDING_MEMBERS_P (decl) = true;
7592 gcc_checking_assert (TREE_CODE (decl) != TEMPLATE_DECL);
7593 }
7594
7595 return true;
7596 }
7597
7598 static bool has_definition (tree decl);
7599
7600 /* DECL is a decl node that must be written by value. DEP is the
7601 decl's depset. */
7602
7603 void
7604 trees_out::decl_value (tree decl, depset *dep)
7605 {
7606 /* We should not be writing clones or template parms. */
7607 gcc_checking_assert (DECL_P (decl)
7608 && !DECL_CLONED_FUNCTION_P (decl)
7609 && !DECL_TEMPLATE_PARM_P (decl));
7610
7611 /* We should never be writing non-typedef ptrmemfuncs by value. */
7612 gcc_checking_assert (TREE_CODE (decl) != TYPE_DECL
7613 || DECL_ORIGINAL_TYPE (decl)
7614 || !TYPE_PTRMEMFUNC_P (TREE_TYPE (decl)));
7615
7616 merge_kind mk = get_merge_kind (decl, dep);
7617
7618 if (CHECKING_P)
7619 {
7620 /* Never start in the middle of a template. */
7621 int use_tpl = -1;
7622 if (tree ti = node_template_info (decl, use_tpl))
7623 gcc_checking_assert (TREE_CODE (TI_TEMPLATE (ti)) == OVERLOAD
7624 || TREE_CODE (TI_TEMPLATE (ti)) == FIELD_DECL
7625 || (DECL_TEMPLATE_RESULT (TI_TEMPLATE (ti))
7626 != decl));
7627 }
7628
7629 if (streaming_p ())
7630 {
7631 /* A new node -> tt_decl. */
7632 decl_val_count++;
7633 i (tt_decl);
7634 u (mk);
7635 start (decl);
7636
7637 if (mk != MK_unique)
7638 {
7639 if (!(mk & MK_template_mask) && !state->is_header ())
7640 {
7641 /* Tell the importer whether this is a global module entity,
7642 or a module entity. This bool merges into the next block
7643 of bools. Sneaky. */
7644 tree o = get_originating_module_decl (decl);
7645 bool is_mod = false;
7646
7647 if (dep && dep->is_alias_tmpl_inst ())
7648 /* Alias template instantiations are templatey, but
7649 found by name. */
7650 is_mod = false;
7651 else if (DECL_LANG_SPECIFIC (o) && DECL_MODULE_PURVIEW_P (o))
7652 is_mod = true;
7653 b (is_mod);
7654 }
7655 b (dep && dep->has_defn ());
7656 }
7657 tree_node_bools (decl);
7658 }
7659
7660 int tag = insert (decl, WK_value);
7661 if (streaming_p ())
7662 dump (dumper::TREE)
7663 && dump ("Writing %s:%d %C:%N%S", merge_kind_name[mk], tag,
7664 TREE_CODE (decl), decl, decl);
7665
7666 tree inner = decl;
7667 int inner_tag = 0;
7668 if (TREE_CODE (decl) == TEMPLATE_DECL)
7669 {
7670 if (dep && dep->is_alias_tmpl_inst ())
7671 inner = NULL_TREE;
7672 else
7673 {
7674 inner = DECL_TEMPLATE_RESULT (decl);
7675 inner_tag = insert (inner, WK_value);
7676 }
7677
7678 if (streaming_p ())
7679 {
7680 int code = inner ? TREE_CODE (inner) : 0;
7681 u (code);
7682 if (inner)
7683 {
7684 start (inner, true);
7685 tree_node_bools (inner);
7686 dump (dumper::TREE)
7687 && dump ("Writing %s:%d %C:%N%S", merge_kind_name[mk], inner_tag,
7688 TREE_CODE (inner), inner, inner);
7689 }
7690 }
7691 }
7692
7693 tree type = NULL_TREE;
7694 int type_tag = 0;
7695 tree stub_decl = NULL_TREE;
7696 int stub_tag = 0;
7697 if (inner && TREE_CODE (inner) == TYPE_DECL)
7698 {
7699 type = TREE_TYPE (inner);
7700 bool has_type = (type == TYPE_MAIN_VARIANT (type)
7701 && TYPE_NAME (type) == inner);
7702
7703 if (streaming_p ())
7704 u (has_type ? TREE_CODE (type) : 0);
7705
7706 if (has_type)
7707 {
7708 type_tag = insert (type, WK_value);
7709 if (streaming_p ())
7710 {
7711 start (type, true);
7712 tree_node_bools (type);
7713 dump (dumper::TREE)
7714 && dump ("Writing type:%d %C:%N", type_tag,
7715 TREE_CODE (type), type);
7716 }
7717
7718 stub_decl = TYPE_STUB_DECL (type);
7719 bool has_stub = inner != stub_decl;
7720 if (streaming_p ())
7721 u (has_stub ? TREE_CODE (stub_decl) : 0);
7722 if (has_stub)
7723 {
7724 stub_tag = insert (stub_decl);
7725 if (streaming_p ())
7726 {
7727 start (stub_decl, true);
7728 tree_node_bools (stub_decl);
7729 dump (dumper::TREE)
7730 && dump ("Writing stub_decl:%d %C:%N", stub_tag,
7731 TREE_CODE (stub_decl), stub_decl);
7732 }
7733 }
7734 else
7735 stub_decl = NULL_TREE;
7736 }
7737 else
7738 /* Regular typedef. */
7739 type = NULL_TREE;
7740 }
7741
7742 /* Stream the container, we want it correctly canonicalized before
7743 we start emitting keys for this decl. */
7744 tree container = decl_container (decl);
7745
7746 unsigned tpl_levels = 0;
7747 if (decl != inner)
7748 tpl_header (decl, &tpl_levels);
7749 if (inner && TREE_CODE (inner) == FUNCTION_DECL)
7750 fn_parms_init (inner);
7751
7752 /* Now write out the merging information, and then really
7753 install the tag values. */
7754 key_mergeable (tag, mk, decl, inner, container, dep);
7755
7756 if (streaming_p ())
7757 dump (dumper::MERGE)
7758 && dump ("Wrote:%d's %s merge key %C:%N", tag,
7759 merge_kind_name[mk], TREE_CODE (decl), decl);
7760
7761 if (inner && TREE_CODE (inner) == FUNCTION_DECL)
7762 fn_parms_fini (inner);
7763
7764 if (!is_key_order ())
7765 tree_node_vals (decl);
7766
7767 if (inner_tag)
7768 {
7769 if (!is_key_order ())
7770 tree_node_vals (inner);
7771 tpl_parms_fini (decl, tpl_levels);
7772 }
7773 else if (!inner)
7774 {
7775 /* A template alias instantiation. */
7776 inner = DECL_TEMPLATE_RESULT (decl);
7777 if (!is_key_order ())
7778 tree_node (inner);
7779 if (streaming_p ())
7780 dump (dumper::TREE)
7781 && dump ("Wrote(%d) alias template %C:%N",
7782 get_tag (inner), TREE_CODE (inner), inner);
7783 inner = NULL_TREE;
7784 }
7785
7786 if (type && !is_key_order ())
7787 {
7788 tree_node_vals (type);
7789 if (stub_decl)
7790 tree_node_vals (stub_decl);
7791 }
7792
7793 if (!is_key_order ())
7794 tree_node (get_constraints (decl));
7795
7796 if (streaming_p ())
7797 {
7798 /* Do not stray outside this section. */
7799 gcc_checking_assert (!dep || dep->section == dep_hash->section);
7800
7801 /* Write the entity index, so we can insert it as soon as we
7802 know this is new. */
7803 install_entity (decl, dep);
7804 }
7805
7806 if (inner
7807 && VAR_OR_FUNCTION_DECL_P (inner)
7808 && DECL_LANG_SPECIFIC (inner)
7809 && DECL_MODULE_ATTACHMENTS_P (inner)
7810 && !is_key_order ())
7811 {
7812 /* Stream the attached entities. */
7813 attachset *set = attached_table->get (DECL_UID (inner));
7814 unsigned num = set->num;
7815 if (streaming_p ())
7816 u (num);
7817 for (unsigned ix = 0; ix != num; ix++)
7818 {
7819 tree attached = set->values[ix];
7820 tree_node (attached);
7821 if (streaming_p ())
7822 dump (dumper::MERGE)
7823 && dump ("Written %d[%u] attached decl %N", tag, ix, attached);
7824 }
7825 }
7826
7827 bool is_typedef = (!type && inner
7828 && TREE_CODE (inner) == TYPE_DECL
7829 && DECL_ORIGINAL_TYPE (inner)
7830 && TYPE_NAME (TREE_TYPE (inner)) == inner);
7831 if (is_typedef)
7832 {
7833 /* A typedef type. */
7834 int type_tag = insert (TREE_TYPE (inner));
7835 if (streaming_p ())
7836 dump (dumper::TREE)
7837 && dump ("Cloned:%d typedef %C:%N", type_tag,
7838 TREE_CODE (TREE_TYPE (inner)), TREE_TYPE (inner));
7839 }
7840
7841 if (streaming_p () && DECL_MAYBE_IN_CHARGE_CDTOR_P (decl))
7842 {
7843 bool cloned_p
7844 = (DECL_CHAIN (decl) && DECL_CLONED_FUNCTION_P (DECL_CHAIN (decl)));
7845 bool needs_vtt_parm_p
7846 = (cloned_p && CLASSTYPE_VBASECLASSES (DECL_CONTEXT (decl)));
7847 bool omit_inherited_parms_p
7848 = (cloned_p && DECL_MAYBE_IN_CHARGE_CONSTRUCTOR_P (decl)
7849 && base_ctor_omit_inherited_parms (decl));
7850 unsigned flags = (int (cloned_p) << 0
7851 | int (needs_vtt_parm_p) << 1
7852 | int (omit_inherited_parms_p) << 2);
7853 u (flags);
7854 dump (dumper::TREE) && dump ("CDTOR %N is %scloned",
7855 decl, cloned_p ? "" : "not ");
7856 }
7857
7858 if (streaming_p ())
7859 dump (dumper::TREE) && dump ("Written decl:%d %C:%N", tag,
7860 TREE_CODE (decl), decl);
7861
7862 if (!inner || NAMESPACE_SCOPE_P (inner))
7863 gcc_checking_assert (!inner
7864 || !dep == (VAR_OR_FUNCTION_DECL_P (inner)
7865 && DECL_LOCAL_DECL_P (inner)));
7866 else if ((TREE_CODE (inner) == TYPE_DECL
7867 && TYPE_NAME (TREE_TYPE (inner)) == inner
7868 && !is_typedef)
7869 || TREE_CODE (inner) == FUNCTION_DECL)
7870 {
7871 bool write_defn = !dep && has_definition (decl);
7872 if (streaming_p ())
7873 u (write_defn);
7874 if (write_defn)
7875 write_definition (decl);
7876 }
7877 }
7878
7879 tree
7880 trees_in::decl_value ()
7881 {
7882 int tag = 0;
7883 bool is_mod = false;
7884 bool has_defn = false;
7885 unsigned mk_u = u ();
7886 if (mk_u >= MK_hwm || !merge_kind_name[mk_u])
7887 {
7888 set_overrun ();
7889 return NULL_TREE;
7890 }
7891
7892 unsigned saved_unused = unused;
7893 unused = 0;
7894
7895 merge_kind mk = merge_kind (mk_u);
7896
7897 tree decl = start ();
7898 if (decl)
7899 {
7900 if (mk != MK_unique)
7901 {
7902 if (!(mk & MK_template_mask) && !state->is_header ())
7903 /* See note in trees_out about where this bool is sequenced. */
7904 is_mod = b ();
7905
7906 has_defn = b ();
7907 }
7908
7909 if (!tree_node_bools (decl))
7910 decl = NULL_TREE;
7911 }
7912
7913 /* Insert into map. */
7914 tag = insert (decl);
7915 if (decl)
7916 dump (dumper::TREE)
7917 && dump ("Reading:%d %C", tag, TREE_CODE (decl));
7918
7919 tree inner = decl;
7920 int inner_tag = 0;
7921 if (decl && TREE_CODE (decl) == TEMPLATE_DECL)
7922 {
7923 int code = u ();
7924 if (!code)
7925 {
7926 inner = NULL_TREE;
7927 DECL_TEMPLATE_RESULT (decl) = error_mark_node;
7928 }
7929 else
7930 {
7931 inner = start (code);
7932 if (inner && tree_node_bools (inner))
7933 DECL_TEMPLATE_RESULT (decl) = inner;
7934 else
7935 decl = NULL_TREE;
7936
7937 inner_tag = insert (inner);
7938 if (decl)
7939 dump (dumper::TREE)
7940 && dump ("Reading:%d %C", inner_tag, TREE_CODE (inner));
7941 }
7942 }
7943
7944 tree type = NULL_TREE;
7945 int type_tag = 0;
7946 tree stub_decl = NULL_TREE;
7947 int stub_tag = 0;
7948 if (decl && inner && TREE_CODE (inner) == TYPE_DECL)
7949 {
7950 if (unsigned type_code = u ())
7951 {
7952 type = start (type_code);
7953 if (type && tree_node_bools (type))
7954 {
7955 TREE_TYPE (inner) = type;
7956 TYPE_NAME (type) = inner;
7957 }
7958 else
7959 decl = NULL_TREE;
7960
7961 type_tag = insert (type);
7962 if (decl)
7963 dump (dumper::TREE)
7964 && dump ("Reading type:%d %C", type_tag, TREE_CODE (type));
7965
7966 if (unsigned stub_code = u ())
7967 {
7968 stub_decl = start (stub_code);
7969 if (stub_decl && tree_node_bools (stub_decl))
7970 {
7971 TREE_TYPE (stub_decl) = type;
7972 TYPE_STUB_DECL (type) = stub_decl;
7973 }
7974 else
7975 decl = NULL_TREE;
7976
7977 stub_tag = insert (stub_decl);
7978 if (decl)
7979 dump (dumper::TREE)
7980 && dump ("Reading stub_decl:%d %C", stub_tag,
7981 TREE_CODE (stub_decl));
7982 }
7983 }
7984 }
7985
7986 if (!decl)
7987 {
7988 bail:
7989 if (inner_tag != 0)
7990 back_refs[~inner_tag] = NULL_TREE;
7991 if (type_tag != 0)
7992 back_refs[~type_tag] = NULL_TREE;
7993 if (stub_tag != 0)
7994 back_refs[~stub_tag] = NULL_TREE;
7995 if (tag != 0)
7996 back_refs[~tag] = NULL_TREE;
7997 set_overrun ();
7998 /* Bail. */
7999 unused = saved_unused;
8000 return NULL_TREE;
8001 }
8002
8003 /* Read the container, to ensure it's already been streamed in. */
8004 tree container = decl_container ();
8005 unsigned tpl_levels = 0;
8006
8007 /* Figure out if this decl is already known about. */
8008 int parm_tag = 0;
8009
8010 if (decl != inner)
8011 if (!tpl_header (decl, &tpl_levels))
8012 goto bail;
8013 if (inner && TREE_CODE (inner) == FUNCTION_DECL)
8014 parm_tag = fn_parms_init (inner);
8015
8016 tree existing = key_mergeable (tag, mk, decl, inner, type, container, is_mod);
8017 tree existing_inner = existing;
8018 if (existing)
8019 {
8020 if (existing == error_mark_node)
8021 goto bail;
8022
8023 if (TREE_CODE (STRIP_TEMPLATE (existing)) == TYPE_DECL)
8024 {
8025 tree etype = TREE_TYPE (existing);
8026 if (TYPE_LANG_SPECIFIC (etype)
8027 && COMPLETE_TYPE_P (etype)
8028 && !CLASSTYPE_MEMBER_VEC (etype))
8029 /* Give it a member vec, we're likely gonna be looking
8030 inside it. */
8031 set_class_bindings (etype, -1);
8032 }
8033
8034 /* Install the existing decl into the back ref array. */
8035 register_duplicate (decl, existing);
8036 back_refs[~tag] = existing;
8037 if (inner_tag != 0)
8038 {
8039 existing_inner = DECL_TEMPLATE_RESULT (existing);
8040 back_refs[~inner_tag] = existing_inner;
8041 }
8042
8043 if (type_tag != 0)
8044 {
8045 tree existing_type = TREE_TYPE (existing);
8046 back_refs[~type_tag] = existing_type;
8047 if (stub_tag != 0)
8048 back_refs[~stub_tag] = TYPE_STUB_DECL (existing_type);
8049 }
8050 }
8051
8052 if (parm_tag)
8053 fn_parms_fini (parm_tag, inner, existing_inner, has_defn);
8054
8055 if (!tree_node_vals (decl))
8056 goto bail;
8057
8058 if (inner_tag)
8059 {
8060 gcc_checking_assert (DECL_TEMPLATE_RESULT (decl) == inner);
8061
8062 if (!tree_node_vals (inner))
8063 goto bail;
8064
8065 if (!tpl_parms_fini (decl, tpl_levels))
8066 goto bail;
8067 }
8068 else if (!inner)
8069 {
8070 inner = tree_node ();
8071 DECL_TEMPLATE_RESULT (decl) = inner;
8072 TREE_TYPE (decl) = TREE_TYPE (inner);
8073 dump (dumper::TREE)
8074 && dump ("Read alias template %C:%N", TREE_CODE (inner), inner);
8075 inner = NULL_TREE;
8076 }
8077
8078 if (type && (!tree_node_vals (type)
8079 || (stub_decl && !tree_node_vals (stub_decl))))
8080 goto bail;
8081
8082 tree constraints = tree_node ();
8083
8084 dump (dumper::TREE) && dump ("Read:%d %C:%N", tag, TREE_CODE (decl), decl);
8085
8086 /* Regular typedefs will have a NULL TREE_TYPE at this point. */
8087 bool is_typedef = (!type && inner
8088 && TREE_CODE (inner) == TYPE_DECL
8089 && DECL_ORIGINAL_TYPE (inner)
8090 && !TREE_TYPE (inner));
8091 if (is_typedef)
8092 {
8093 /* Frob it to be ready for cloning. */
8094 TREE_TYPE (inner) = DECL_ORIGINAL_TYPE (inner);
8095 DECL_ORIGINAL_TYPE (inner) = NULL_TREE;
8096 }
8097
8098 existing = back_refs[~tag];
8099 bool installed = install_entity (existing);
8100 bool is_new = existing == decl;
8101
8102 if (inner
8103 && VAR_OR_FUNCTION_DECL_P (inner)
8104 && DECL_LANG_SPECIFIC (inner)
8105 && DECL_MODULE_ATTACHMENTS_P (inner))
8106 {
8107 /* Read and maybe install the attached entities. */
8108 attachset *set
8109 = attached_table->get (DECL_UID (STRIP_TEMPLATE (existing)));
8110 unsigned num = u ();
8111 if (!is_new == !set)
8112 set_overrun ();
8113 if (is_new)
8114 set = attached_table->create (DECL_UID (inner), num, NULL_TREE);
8115 for (unsigned ix = 0; !get_overrun () && ix != num; ix++)
8116 {
8117 tree attached = tree_node ();
8118 dump (dumper::MERGE)
8119 && dump ("Read %d[%u] %s attached decl %N", tag, ix,
8120 is_new ? "new" : "matched", attached);
8121 if (is_new)
8122 set->values[ix] = attached;
8123 else if (set->values[ix] != attached)
8124 set_overrun ();
8125 }
8126 }
8127
8128 if (is_new)
8129 {
8130 /* A newly discovered node. */
8131 if (TREE_CODE (decl) == FUNCTION_DECL && DECL_VIRTUAL_P (decl))
8132 /* Mark this identifier as naming a virtual function --
8133 lookup_overrides relies on this optimization. */
8134 IDENTIFIER_VIRTUAL_P (DECL_NAME (decl)) = true;
8135
8136 if (installed)
8137 {
8138 /* Mark the entity as imported and add it to the entity
8139 array and map. */
8140 retrofit_lang_decl (decl);
8141 DECL_MODULE_IMPORT_P (decl) = true;
8142 if (inner_tag)
8143 {
8144 retrofit_lang_decl (inner);
8145 DECL_MODULE_IMPORT_P (inner) = true;
8146 }
8147 }
8148
8149 if (constraints)
8150 set_constraints (decl, constraints);
8151
8152 if (TREE_CODE (decl) == INTEGER_CST && !TREE_OVERFLOW (decl))
8153 {
8154 decl = cache_integer_cst (decl, true);
8155 back_refs[~tag] = decl;
8156 }
8157
8158 if (is_typedef)
8159 set_underlying_type (inner);
8160
8161 if (inner_tag)
8162 /* Set the TEMPLATE_DECL's type. */
8163 TREE_TYPE (decl) = TREE_TYPE (inner);
8164
8165 /* The late insertion of an alias here or an implicit member
8166 (next block), is ok, because we ensured that all imports were
8167 loaded up before we started this cluster. Thus an insertion
8168 from some other import cannot have happened between the
8169 merged insertion above and these insertions down here. */
8170 if (mk == MK_alias_spec)
8171 {
8172 /* Insert into type table. */
8173 tree ti = DECL_TEMPLATE_INFO (inner);
8174 spec_entry elt =
8175 {TI_TEMPLATE (ti), TI_ARGS (ti), TREE_TYPE (inner)};
8176 tree texist = match_mergeable_specialization (false, &elt);
8177 if (texist)
8178 set_overrun ();
8179 }
8180
8181 if (DECL_ARTIFICIAL (decl)
8182 && TREE_CODE (decl) == FUNCTION_DECL
8183 && !DECL_TEMPLATE_INFO (decl)
8184 && DECL_CONTEXT (decl) && TYPE_P (DECL_CONTEXT (decl))
8185 && TYPE_SIZE (DECL_CONTEXT (decl))
8186 && !DECL_THUNK_P (decl))
8187 /* A new implicit member function, when the class is
8188 complete. This means the importee declared it, and
8189 we must now add it to the class. Note that implicit
8190 member fns of template instantiations do not themselves
8191 look like templates. */
8192 if (!install_implicit_member (inner))
8193 set_overrun ();
8194 }
8195 else
8196 {
8197 /* DECL is the to-be-discarded decl. Its internal pointers will
8198 be to the EXISTING's structure. Frob it to point to its
8199 own other structures, so loading its definition will alter
8200 it, and not the existing decl. */
8201 dump (dumper::MERGE) && dump ("Deduping %N", existing);
8202
8203 if (inner_tag)
8204 DECL_TEMPLATE_RESULT (decl) = inner;
8205
8206 if (type)
8207 {
8208 /* Point at the to-be-discarded type & decl. */
8209 TYPE_NAME (type) = inner;
8210 TREE_TYPE (inner) = type;
8211
8212 TYPE_STUB_DECL (type) = stub_decl ? stub_decl : inner;
8213 if (stub_decl)
8214 TREE_TYPE (stub_decl) = type;
8215 }
8216
8217 if (inner_tag)
8218 /* Set the TEMPLATE_DECL's type. */
8219 TREE_TYPE (decl) = TREE_TYPE (inner);
8220
8221 if (!is_matching_decl (existing, decl))
8222 unmatched_duplicate (existing);
8223
8224 /* And our result is the existing node. */
8225 decl = existing;
8226 }
8227
8228 if (is_typedef)
8229 {
8230 /* Insert the type into the array now. */
8231 tag = insert (TREE_TYPE (decl));
8232 dump (dumper::TREE)
8233 && dump ("Cloned:%d typedef %C:%N",
8234 tag, TREE_CODE (TREE_TYPE (decl)), TREE_TYPE (decl));
8235 }
8236
8237 unused = saved_unused;
8238
8239 if (DECL_MAYBE_IN_CHARGE_CDTOR_P (decl))
8240 {
8241 unsigned flags = u ();
8242
8243 if (is_new)
8244 {
8245 bool cloned_p = flags & 1;
8246 dump (dumper::TREE) && dump ("CDTOR %N is %scloned",
8247 decl, cloned_p ? "" : "not ");
8248 if (cloned_p)
8249 build_cdtor_clones (decl, flags & 2, flags & 4,
8250 /* Update the member vec, if there is
8251 one (we're in a different cluster
8252 to the class defn). */
8253 CLASSTYPE_MEMBER_VEC (DECL_CONTEXT (decl)));
8254 }
8255 }
8256
8257 if (inner
8258 && !NAMESPACE_SCOPE_P (inner)
8259 && ((TREE_CODE (inner) == TYPE_DECL
8260 && TYPE_NAME (TREE_TYPE (inner)) == inner
8261 && !is_typedef)
8262 || TREE_CODE (inner) == FUNCTION_DECL)
8263 && u ())
8264 read_definition (decl);
8265
8266 return decl;
8267 }
8268
8269 /* DECL is an unnameable member of CTX. Return a suitable identifying
8270 index. */
8271
8272 static unsigned
8273 get_field_ident (tree ctx, tree decl)
8274 {
8275 gcc_checking_assert (TREE_CODE (decl) == USING_DECL
8276 || !DECL_NAME (decl)
8277 || IDENTIFIER_ANON_P (DECL_NAME (decl)));
8278
8279 unsigned ix = 0;
8280 for (tree fields = TYPE_FIELDS (ctx);
8281 fields; fields = DECL_CHAIN (fields))
8282 {
8283 if (fields == decl)
8284 return ix;
8285
8286 if (DECL_CONTEXT (fields) == ctx
8287 && (TREE_CODE (fields) == USING_DECL
8288 || (TREE_CODE (fields) == FIELD_DECL
8289 && (!DECL_NAME (fields)
8290 || IDENTIFIER_ANON_P (DECL_NAME (fields))))))
8291 /* Count this field. */
8292 ix++;
8293 }
8294 gcc_unreachable ();
8295 }
8296
8297 static tree
8298 lookup_field_ident (tree ctx, unsigned ix)
8299 {
8300 for (tree fields = TYPE_FIELDS (ctx);
8301 fields; fields = DECL_CHAIN (fields))
8302 if (DECL_CONTEXT (fields) == ctx
8303 && (TREE_CODE (fields) == USING_DECL
8304 || (TREE_CODE (fields) == FIELD_DECL
8305 && (!DECL_NAME (fields)
8306 || IDENTIFIER_ANON_P (DECL_NAME (fields))))))
8307 if (!ix--)
8308 return fields;
8309
8310 return NULL_TREE;
8311 }
8312
8313 /* Reference DECL. REF indicates the walk kind we are performing.
8314 Return true if we should write this decl by value. */
8315
8316 bool
8317 trees_out::decl_node (tree decl, walk_kind ref)
8318 {
8319 gcc_checking_assert (DECL_P (decl) && !DECL_TEMPLATE_PARM_P (decl)
8320 && DECL_CONTEXT (decl));
8321
8322 if (ref == WK_value)
8323 {
8324 depset *dep = dep_hash->find_dependency (decl);
8325 decl_value (decl, dep);
8326 return false;
8327 }
8328
8329 switch (TREE_CODE (decl))
8330 {
8331 default:
8332 break;
8333
8334 case FUNCTION_DECL:
8335 gcc_checking_assert (!DECL_LOCAL_DECL_P (decl));
8336 break;
8337
8338 case RESULT_DECL:
8339 /* Unlike PARM_DECLs, RESULT_DECLs are only generated and
8340 referenced when we're inside the function itself. */
8341 return true;
8342
8343 case PARM_DECL:
8344 {
8345 if (streaming_p ())
8346 i (tt_parm);
8347 tree_node (DECL_CONTEXT (decl));
8348 if (streaming_p ())
8349 {
8350 /* That must have put this in the map. */
8351 walk_kind ref = ref_node (decl);
8352 if (ref != WK_none)
8353 // FIXME:OPTIMIZATION We can wander into bits of the
8354 // template this was instantiated from. For instance
8355 // deferred noexcept and default parms. Currently we'll
8356 // end up cloning those bits of tree. It would be nice
8357 // to reference those specific nodes. I think putting
8358 // those things in the map when we reference their
8359 // template by name. See the note in add_indirects.
8360 return true;
8361
8362 dump (dumper::TREE)
8363 && dump ("Wrote %s reference %N",
8364 TREE_CODE (decl) == PARM_DECL ? "parameter" : "result",
8365 decl);
8366 }
8367 }
8368 return false;
8369
8370 case IMPORTED_DECL:
8371 /* This describes a USING_DECL to the ME's debug machinery. It
8372 originates from the fortran FE, and has nothing to do with
8373 C++ modules. */
8374 return true;
8375
8376 case LABEL_DECL:
8377 return true;
8378
8379 case CONST_DECL:
8380 {
8381 /* If I end up cloning enum decls, implementing C++20 using
8382 E::v, this will need tweaking. */
8383 if (streaming_p ())
8384 i (tt_enum_decl);
8385 tree ctx = DECL_CONTEXT (decl);
8386 gcc_checking_assert (TREE_CODE (ctx) == ENUMERAL_TYPE);
8387 tree_node (ctx);
8388 tree_node (DECL_NAME (decl));
8389
8390 int tag = insert (decl);
8391 if (streaming_p ())
8392 dump (dumper::TREE)
8393 && dump ("Wrote enum decl:%d %C:%N", tag, TREE_CODE (decl), decl);
8394 return false;
8395 }
8396 break;
8397
8398 case USING_DECL:
8399 if (TREE_CODE (DECL_CONTEXT (decl)) == FUNCTION_DECL)
8400 break;
8401 /* FALLTHROUGH */
8402
8403 case FIELD_DECL:
8404 {
8405 if (streaming_p ())
8406 i (tt_data_member);
8407
8408 tree ctx = DECL_CONTEXT (decl);
8409 tree_node (ctx);
8410
8411 tree name = NULL_TREE;
8412
8413 if (TREE_CODE (decl) == USING_DECL)
8414 ;
8415 else
8416 {
8417 name = DECL_NAME (decl);
8418 if (name && IDENTIFIER_ANON_P (name))
8419 name = NULL_TREE;
8420 }
8421
8422 tree_node (name);
8423 if (!name && streaming_p ())
8424 {
8425 unsigned ix = get_field_ident (ctx, decl);
8426 u (ix);
8427 }
8428
8429 int tag = insert (decl);
8430 if (streaming_p ())
8431 dump (dumper::TREE)
8432 && dump ("Wrote member:%d %C:%N", tag, TREE_CODE (decl), decl);
8433 return false;
8434 }
8435 break;
8436
8437 case VAR_DECL:
8438 gcc_checking_assert (!DECL_LOCAL_DECL_P (decl));
8439 if (DECL_VTABLE_OR_VTT_P (decl))
8440 {
8441 /* VTT or VTABLE, they are all on the vtables list. */
8442 tree ctx = CP_DECL_CONTEXT (decl);
8443 tree vtable = CLASSTYPE_VTABLES (ctx);
8444 for (unsigned ix = 0; ; vtable = DECL_CHAIN (vtable), ix++)
8445 if (vtable == decl)
8446 {
8447 gcc_checking_assert (DECL_VIRTUAL_P (decl));
8448 if (streaming_p ())
8449 {
8450 u (tt_vtable);
8451 u (ix);
8452 dump (dumper::TREE)
8453 && dump ("Writing vtable %N[%u]", ctx, ix);
8454 }
8455 tree_node (ctx);
8456 return false;
8457 }
8458 gcc_unreachable ();
8459 }
8460
8461 if (DECL_TINFO_P (decl))
8462 {
8463 tinfo:
8464 /* A typeinfo, tt_tinfo_typedef or tt_tinfo_var. */
8465 bool is_var = TREE_CODE (decl) == VAR_DECL;
8466 tree type = TREE_TYPE (decl);
8467 unsigned ix = get_pseudo_tinfo_index (type);
8468 if (streaming_p ())
8469 {
8470 i (is_var ? tt_tinfo_var : tt_tinfo_typedef);
8471 u (ix);
8472 }
8473
8474 if (is_var)
8475 {
8476 /* We also need the type it is for and mangled name, so
8477 the reader doesn't need to complete the type (which
8478 would break section ordering). The type it is for is
8479 stashed on the name's TREE_TYPE. */
8480 tree name = DECL_NAME (decl);
8481 tree_node (name);
8482 type = TREE_TYPE (name);
8483 tree_node (type);
8484 }
8485
8486 int tag = insert (decl);
8487 if (streaming_p ())
8488 dump (dumper::TREE)
8489 && dump ("Wrote tinfo_%s:%d %u %N", is_var ? "var" : "type",
8490 tag, ix, type);
8491
8492 if (!is_var)
8493 {
8494 tag = insert (type);
8495 if (streaming_p ())
8496 dump (dumper::TREE)
8497 && dump ("Wrote tinfo_type:%d %u %N", tag, ix, type);
8498 }
8499 return false;
8500 }
8501 break;
8502
8503 case TYPE_DECL:
8504 if (DECL_TINFO_P (decl))
8505 goto tinfo;
8506 break;
8507 }
8508
8509 if (DECL_THUNK_P (decl))
8510 {
8511 /* Thunks are similar to binfos -- write the thunked-to decl and
8512 then thunk-specific key info. */
8513 if (streaming_p ())
8514 {
8515 i (tt_thunk);
8516 i (THUNK_FIXED_OFFSET (decl));
8517 }
8518
8519 tree target = decl;
8520 while (DECL_THUNK_P (target))
8521 target = THUNK_TARGET (target);
8522 tree_node (target);
8523 tree_node (THUNK_VIRTUAL_OFFSET (decl));
8524 int tag = insert (decl);
8525 if (streaming_p ())
8526 dump (dumper::TREE)
8527 && dump ("Wrote:%d thunk %N to %N", tag, DECL_NAME (decl), target);
8528 return false;
8529 }
8530
8531 if (DECL_CLONED_FUNCTION_P (decl))
8532 {
8533 tree target = get_clone_target (decl);
8534 if (streaming_p ())
8535 i (tt_clone_ref);
8536
8537 tree_node (target);
8538 tree_node (DECL_NAME (decl));
8539 int tag = insert (decl);
8540 if (streaming_p ())
8541 dump (dumper::TREE)
8542 && dump ("Wrote:%d clone %N of %N", tag, DECL_NAME (decl), target);
8543 return false;
8544 }
8545
8546 /* Everything left should be a thing that is in the entity table.
8547 Mostly things that can be defined outside of their (original
8548 declaration) context. */
8549 gcc_checking_assert (TREE_CODE (decl) == TEMPLATE_DECL
8550 || TREE_CODE (decl) == VAR_DECL
8551 || TREE_CODE (decl) == FUNCTION_DECL
8552 || TREE_CODE (decl) == TYPE_DECL
8553 || TREE_CODE (decl) == USING_DECL
8554 || TREE_CODE (decl) == CONCEPT_DECL
8555 || TREE_CODE (decl) == NAMESPACE_DECL);
8556
8557 int use_tpl = -1;
8558 tree ti = node_template_info (decl, use_tpl);
8559 tree tpl = NULL_TREE;
8560
8561 /* If this is the TEMPLATE_DECL_RESULT of a TEMPLATE_DECL, get the
8562 TEMPLATE_DECL. Note TI_TEMPLATE is not a TEMPLATE_DECL for
8563 (some) friends, so we need to check that. */
8564 // FIXME: Should local friend template specializations be by value?
8565 // They don't get idents so we'll never know they're imported, but I
8566 // think we can only reach them from the TU that defines the
8567 // befriending class?
8568 if (ti && TREE_CODE (TI_TEMPLATE (ti)) == TEMPLATE_DECL
8569 && DECL_TEMPLATE_RESULT (TI_TEMPLATE (ti)) == decl)
8570 {
8571 tpl = TI_TEMPLATE (ti);
8572 partial_template:
8573 if (streaming_p ())
8574 {
8575 i (tt_template);
8576 dump (dumper::TREE)
8577 && dump ("Writing implicit template %C:%N%S",
8578 TREE_CODE (tpl), tpl, tpl);
8579 }
8580 tree_node (tpl);
8581
8582 /* Streaming TPL caused us to visit DECL and maybe its type. */
8583 gcc_checking_assert (TREE_VISITED (decl));
8584 if (DECL_IMPLICIT_TYPEDEF_P (decl))
8585 gcc_checking_assert (TREE_VISITED (TREE_TYPE (decl)));
8586 return false;
8587 }
8588
8589 tree ctx = CP_DECL_CONTEXT (decl);
8590 depset *dep = NULL;
8591 if (streaming_p ())
8592 dep = dep_hash->find_dependency (decl);
8593 else if (TREE_CODE (ctx) != FUNCTION_DECL
8594 || TREE_CODE (decl) == TEMPLATE_DECL
8595 || (dep_hash->sneakoscope && DECL_IMPLICIT_TYPEDEF_P (decl))
8596 || (DECL_LANG_SPECIFIC (decl)
8597 && DECL_MODULE_IMPORT_P (decl)))
8598 dep = dep_hash->add_dependency (decl,
8599 TREE_CODE (decl) == NAMESPACE_DECL
8600 && !DECL_NAMESPACE_ALIAS (decl)
8601 ? depset::EK_NAMESPACE : depset::EK_DECL);
8602
8603 if (!dep)
8604 {
8605 /* Some internal entity of context. Do by value. */
8606 decl_value (decl, NULL);
8607 return false;
8608 }
8609
8610 if (dep->get_entity_kind () == depset::EK_REDIRECT)
8611 {
8612 /* The DECL_TEMPLATE_RESULT of a partial specialization.
8613 Write the partial specialization's template. */
8614 depset *redirect = dep->deps[0];
8615 gcc_checking_assert (redirect->get_entity_kind () == depset::EK_PARTIAL);
8616 tpl = redirect->get_entity ();
8617 goto partial_template;
8618 }
8619
8620 if (streaming_p ())
8621 {
8622 /* Locate the entity. */
8623 unsigned index = dep->cluster;
8624 unsigned import = 0;
8625
8626 if (dep->is_import ())
8627 import = dep->section;
8628 else if (CHECKING_P)
8629 /* It should be what we put there. */
8630 gcc_checking_assert (index == ~import_entity_index (decl));
8631
8632 #if CHECKING_P
8633 if (importedness)
8634 gcc_assert (!import == (importedness < 0));
8635 #endif
8636 i (tt_entity);
8637 u (import);
8638 u (index);
8639 }
8640
8641 int tag = insert (decl);
8642 if (streaming_p () && dump (dumper::TREE))
8643 {
8644 char const *kind = "import";
8645 module_state *from = (*modules)[0];
8646 if (dep->is_import ())
8647 /* Rediscover the unremapped index. */
8648 from = import_entity_module (import_entity_index (decl));
8649 else
8650 {
8651 tree o = get_originating_module_decl (decl);
8652 kind = (DECL_LANG_SPECIFIC (o) && DECL_MODULE_PURVIEW_P (o)
8653 ? "purview" : "GMF");
8654 }
8655 dump ("Wrote %s:%d %C:%N@%M", kind,
8656 tag, TREE_CODE (decl), decl, from);
8657 }
8658
8659 add_indirects (decl);
8660
8661 return false;
8662 }
8663
8664 void
8665 trees_out::type_node (tree type)
8666 {
8667 gcc_assert (TYPE_P (type));
8668
8669 tree root = (TYPE_NAME (type)
8670 ? TREE_TYPE (TYPE_NAME (type)) : TYPE_MAIN_VARIANT (type));
8671
8672 if (type != root)
8673 {
8674 if (streaming_p ())
8675 i (tt_variant_type);
8676 tree_node (root);
8677
8678 int flags = -1;
8679
8680 if (TREE_CODE (type) == FUNCTION_TYPE
8681 || TREE_CODE (type) == METHOD_TYPE)
8682 {
8683 int quals = type_memfn_quals (type);
8684 int rquals = type_memfn_rqual (type);
8685 tree raises = TYPE_RAISES_EXCEPTIONS (type);
8686 bool late = TYPE_HAS_LATE_RETURN_TYPE (type);
8687
8688 if (raises != TYPE_RAISES_EXCEPTIONS (root)
8689 || rquals != type_memfn_rqual (root)
8690 || quals != type_memfn_quals (root)
8691 || late != TYPE_HAS_LATE_RETURN_TYPE (root))
8692 flags = rquals | (int (late) << 2) | (quals << 3);
8693 }
8694 else
8695 {
8696 if (TYPE_USER_ALIGN (type))
8697 flags = exact_log2 (TYPE_ALIGN (type));
8698 }
8699
8700 if (streaming_p ())
8701 i (flags);
8702
8703 if (flags < 0)
8704 ;
8705 else if (TREE_CODE (type) == FUNCTION_TYPE
8706 || TREE_CODE (type) == METHOD_TYPE)
8707 {
8708 tree raises = TYPE_RAISES_EXCEPTIONS (type);
8709 if (raises == TYPE_RAISES_EXCEPTIONS (root))
8710 raises = error_mark_node;
8711 tree_node (raises);
8712 }
8713
8714 tree_node (TYPE_ATTRIBUTES (type));
8715
8716 if (streaming_p ())
8717 {
8718 /* Qualifiers. */
8719 int rquals = cp_type_quals (root);
8720 int quals = cp_type_quals (type);
8721 if (quals == rquals)
8722 quals = -1;
8723 i (quals);
8724 }
8725
8726 if (ref_node (type) != WK_none)
8727 {
8728 int tag = insert (type);
8729 if (streaming_p ())
8730 {
8731 i (0);
8732 dump (dumper::TREE)
8733 && dump ("Wrote:%d variant type %C", tag, TREE_CODE (type));
8734 }
8735 }
8736 return;
8737 }
8738
8739 if (tree name = TYPE_NAME (type))
8740 if ((TREE_CODE (name) == TYPE_DECL && DECL_ORIGINAL_TYPE (name))
8741 || DECL_TEMPLATE_PARM_P (name)
8742 || TREE_CODE (type) == RECORD_TYPE
8743 || TREE_CODE (type) == UNION_TYPE
8744 || TREE_CODE (type) == ENUMERAL_TYPE)
8745 {
8746 /* We can meet template parms that we didn't meet in the
8747 tpl_parms walk, because we're referring to a derived type
8748 that was previously constructed from equivalent template
8749 parms. */
8750 if (streaming_p ())
8751 {
8752 i (tt_typedef_type);
8753 dump (dumper::TREE)
8754 && dump ("Writing %stypedef %C:%N",
8755 DECL_IMPLICIT_TYPEDEF_P (name) ? "implicit " : "",
8756 TREE_CODE (name), name);
8757 }
8758 tree_node (name);
8759 if (streaming_p ())
8760 dump (dumper::TREE) && dump ("Wrote typedef %C:%N%S",
8761 TREE_CODE (name), name, name);
8762 gcc_checking_assert (TREE_VISITED (type));
8763 return;
8764 }
8765
8766 if (TYPE_PTRMEMFUNC_P (type))
8767 {
8768 /* This is a distinct type node, masquerading as a structure. */
8769 tree fn_type = TYPE_PTRMEMFUNC_FN_TYPE (type);
8770 if (streaming_p ())
8771 i (tt_ptrmem_type);
8772 tree_node (fn_type);
8773 int tag = insert (type);
8774 if (streaming_p ())
8775 dump (dumper::TREE) && dump ("Written:%d ptrmem type", tag);
8776 return;
8777 }
8778
8779 if (streaming_p ())
8780 {
8781 u (tt_derived_type);
8782 u (TREE_CODE (type));
8783 }
8784
8785 tree_node (TREE_TYPE (type));
8786 switch (TREE_CODE (type))
8787 {
8788 default:
8789 /* We should never meet a type here that is indescribable in
8790 terms of other types. */
8791 gcc_unreachable ();
8792
8793 case ARRAY_TYPE:
8794 tree_node (TYPE_DOMAIN (type));
8795 if (streaming_p ())
8796 /* Dependent arrays are constructed with TYPE_DEPENENT_P
8797 already set. */
8798 u (TYPE_DEPENDENT_P (type));
8799 break;
8800
8801 case COMPLEX_TYPE:
8802 /* No additional data. */
8803 break;
8804
8805 case BOOLEAN_TYPE:
8806 /* A non-standard boolean type. */
8807 if (streaming_p ())
8808 u (TYPE_PRECISION (type));
8809 break;
8810
8811 case INTEGER_TYPE:
8812 if (TREE_TYPE (type))
8813 {
8814 /* A range type (representing an array domain). */
8815 tree_node (TYPE_MIN_VALUE (type));
8816 tree_node (TYPE_MAX_VALUE (type));
8817 }
8818 else
8819 {
8820 /* A new integral type (representing a bitfield). */
8821 if (streaming_p ())
8822 {
8823 unsigned prec = TYPE_PRECISION (type);
8824 bool unsigned_p = TYPE_UNSIGNED (type);
8825
8826 u ((prec << 1) | unsigned_p);
8827 }
8828 }
8829 break;
8830
8831 case METHOD_TYPE:
8832 case FUNCTION_TYPE:
8833 {
8834 gcc_checking_assert (type_memfn_rqual (type) == REF_QUAL_NONE);
8835
8836 tree arg_types = TYPE_ARG_TYPES (type);
8837 if (TREE_CODE (type) == METHOD_TYPE)
8838 {
8839 tree_node (TREE_TYPE (TREE_VALUE (arg_types)));
8840 arg_types = TREE_CHAIN (arg_types);
8841 }
8842 tree_node (arg_types);
8843 }
8844 break;
8845
8846 case OFFSET_TYPE:
8847 tree_node (TYPE_OFFSET_BASETYPE (type));
8848 break;
8849
8850 case POINTER_TYPE:
8851 /* No additional data. */
8852 break;
8853
8854 case REFERENCE_TYPE:
8855 if (streaming_p ())
8856 u (TYPE_REF_IS_RVALUE (type));
8857 break;
8858
8859 case DECLTYPE_TYPE:
8860 case TYPEOF_TYPE:
8861 case UNDERLYING_TYPE:
8862 tree_node (TYPE_VALUES_RAW (type));
8863 if (TREE_CODE (type) == DECLTYPE_TYPE)
8864 /* We stash a whole bunch of things into decltype's
8865 flags. */
8866 if (streaming_p ())
8867 tree_node_bools (type);
8868 break;
8869
8870 case TYPE_ARGUMENT_PACK:
8871 /* No additional data. */
8872 break;
8873
8874 case TYPE_PACK_EXPANSION:
8875 if (streaming_p ())
8876 u (PACK_EXPANSION_LOCAL_P (type));
8877 tree_node (PACK_EXPANSION_PARAMETER_PACKS (type));
8878 break;
8879
8880 case TYPENAME_TYPE:
8881 {
8882 tree_node (TYPE_CONTEXT (type));
8883 tree_node (DECL_NAME (TYPE_NAME (type)));
8884 tree_node (TYPENAME_TYPE_FULLNAME (type));
8885 if (streaming_p ())
8886 {
8887 enum tag_types tag_type = none_type;
8888 if (TYPENAME_IS_ENUM_P (type))
8889 tag_type = enum_type;
8890 else if (TYPENAME_IS_CLASS_P (type))
8891 tag_type = class_type;
8892 u (int (tag_type));
8893 }
8894 }
8895 break;
8896
8897 case UNBOUND_CLASS_TEMPLATE:
8898 {
8899 tree decl = TYPE_NAME (type);
8900 tree_node (DECL_CONTEXT (decl));
8901 tree_node (DECL_NAME (decl));
8902 tree_node (DECL_TEMPLATE_PARMS (decl));
8903 }
8904 break;
8905
8906 case VECTOR_TYPE:
8907 if (streaming_p ())
8908 {
8909 poly_uint64 nunits = TYPE_VECTOR_SUBPARTS (type);
8910 /* to_constant asserts that only coeff[0] is of interest. */
8911 wu (static_cast<unsigned HOST_WIDE_INT> (nunits.to_constant ()));
8912 }
8913 break;
8914 }
8915
8916 /* We may have met the type during emitting the above. */
8917 if (ref_node (type) != WK_none)
8918 {
8919 int tag = insert (type);
8920 if (streaming_p ())
8921 {
8922 i (0);
8923 dump (dumper::TREE)
8924 && dump ("Wrote:%d derived type %C", tag, TREE_CODE (type));
8925 }
8926 }
8927
8928 return;
8929 }
8930
8931 /* T is (mostly*) a non-mergeable node that must be written by value.
8932 The mergeable case is a BINFO, which are as-if DECLSs. */
8933
8934 void
8935 trees_out::tree_value (tree t)
8936 {
8937 /* We should never be writing a type by value. tree_type should
8938 have streamed it, or we're going via its TYPE_DECL. */
8939 gcc_checking_assert (!TYPE_P (t));
8940
8941 if (DECL_P (t))
8942 /* No template, type, var or function, except anonymous
8943 non-context vars. */
8944 gcc_checking_assert ((TREE_CODE (t) != TEMPLATE_DECL
8945 && TREE_CODE (t) != TYPE_DECL
8946 && (TREE_CODE (t) != VAR_DECL
8947 || (!DECL_NAME (t) && !DECL_CONTEXT (t)))
8948 && TREE_CODE (t) != FUNCTION_DECL));
8949
8950 if (streaming_p ())
8951 {
8952 /* A new node -> tt_node. */
8953 tree_val_count++;
8954 i (tt_node);
8955 start (t);
8956 tree_node_bools (t);
8957 }
8958
8959 if (TREE_CODE (t) == TREE_BINFO)
8960 /* Binfos are decl-like and need merging information. */
8961 binfo_mergeable (t);
8962
8963 int tag = insert (t, WK_value);
8964 if (streaming_p ())
8965 dump (dumper::TREE)
8966 && dump ("Writing tree:%d %C:%N", tag, TREE_CODE (t), t);
8967
8968 tree_node_vals (t);
8969
8970 if (streaming_p ())
8971 dump (dumper::TREE) && dump ("Written tree:%d %C:%N", tag, TREE_CODE (t), t);
8972 }
8973
8974 tree
8975 trees_in::tree_value ()
8976 {
8977 tree t = start ();
8978 if (!t || !tree_node_bools (t))
8979 return NULL_TREE;
8980
8981 tree existing = t;
8982 if (TREE_CODE (t) == TREE_BINFO)
8983 {
8984 tree type;
8985 unsigned ix = binfo_mergeable (&type);
8986 if (TYPE_BINFO (type))
8987 {
8988 /* We already have a definition, this must be a duplicate. */
8989 dump (dumper::MERGE)
8990 && dump ("Deduping binfo %N[%u]", type, ix);
8991 existing = TYPE_BINFO (type);
8992 while (existing && ix)
8993 existing = TREE_CHAIN (existing);
8994 if (existing)
8995 register_duplicate (t, existing);
8996 else
8997 /* Error, mismatch -- diagnose in read_class_def's
8998 checking. */
8999 existing = t;
9000 }
9001 }
9002
9003 /* Insert into map. */
9004 int tag = insert (existing);
9005 dump (dumper::TREE)
9006 && dump ("Reading tree:%d %C", tag, TREE_CODE (t));
9007
9008 if (!tree_node_vals (t))
9009 {
9010 back_refs[~tag] = NULL_TREE;
9011 set_overrun ();
9012 /* Bail. */
9013 return NULL_TREE;
9014 }
9015
9016 dump (dumper::TREE) && dump ("Read tree:%d %C:%N", tag, TREE_CODE (t), t);
9017
9018 if (TREE_CODE (existing) == INTEGER_CST && !TREE_OVERFLOW (existing))
9019 {
9020 existing = cache_integer_cst (t, true);
9021 back_refs[~tag] = existing;
9022 }
9023
9024 return existing;
9025 }
9026
9027 /* Stream out tree node T. We automatically create local back
9028 references, which is essentially a single pass lisp
9029 self-referential structure pretty-printer. */
9030
9031 void
9032 trees_out::tree_node (tree t)
9033 {
9034 dump.indent ();
9035 walk_kind ref = ref_node (t);
9036 if (ref == WK_none)
9037 goto done;
9038
9039 if (ref != WK_normal)
9040 goto skip_normal;
9041
9042 if (TREE_CODE (t) == IDENTIFIER_NODE)
9043 {
9044 /* An identifier node -> tt_id, tt_conv_id, tt_anon_id, tt_lambda_id. */
9045 int code = tt_id;
9046 if (IDENTIFIER_ANON_P (t))
9047 code = IDENTIFIER_LAMBDA_P (t) ? tt_lambda_id : tt_anon_id;
9048 else if (IDENTIFIER_CONV_OP_P (t))
9049 code = tt_conv_id;
9050
9051 if (streaming_p ())
9052 i (code);
9053
9054 if (code == tt_conv_id)
9055 {
9056 tree type = TREE_TYPE (t);
9057 gcc_checking_assert (type || t == conv_op_identifier);
9058 tree_node (type);
9059 }
9060 else if (code == tt_id && streaming_p ())
9061 str (IDENTIFIER_POINTER (t), IDENTIFIER_LENGTH (t));
9062
9063 int tag = insert (t);
9064 if (streaming_p ())
9065 {
9066 /* We know the ordering of the 4 id tags. */
9067 static const char *const kinds[] =
9068 {"", "conv_op ", "anon ", "lambda "};
9069 dump (dumper::TREE)
9070 && dump ("Written:%d %sidentifier:%N", tag,
9071 kinds[code - tt_id],
9072 code == tt_conv_id ? TREE_TYPE (t) : t);
9073 }
9074 goto done;
9075 }
9076
9077 if (TREE_CODE (t) == TREE_BINFO)
9078 {
9079 /* A BINFO -> tt_binfo.
9080 We must do this by reference. We stream the binfo tree
9081 itself when streaming its owning RECORD_TYPE. That we got
9082 here means the dominating type is not in this SCC. */
9083 if (streaming_p ())
9084 i (tt_binfo);
9085 binfo_mergeable (t);
9086 gcc_checking_assert (!TREE_VISITED (t));
9087 int tag = insert (t);
9088 if (streaming_p ())
9089 dump (dumper::TREE) && dump ("Inserting binfo:%d %N", tag, t);
9090 goto done;
9091 }
9092
9093 if (TREE_CODE (t) == INTEGER_CST
9094 && !TREE_OVERFLOW (t)
9095 && TREE_CODE (TREE_TYPE (t)) == ENUMERAL_TYPE)
9096 {
9097 /* An integral constant of enumeral type. See if it matches one
9098 of the enumeration values. */
9099 for (tree values = TYPE_VALUES (TREE_TYPE (t));
9100 values; values = TREE_CHAIN (values))
9101 {
9102 tree decl = TREE_VALUE (values);
9103 if (tree_int_cst_equal (DECL_INITIAL (decl), t))
9104 {
9105 if (streaming_p ())
9106 u (tt_enum_value);
9107 tree_node (decl);
9108 dump (dumper::TREE) && dump ("Written enum value %N", decl);
9109 goto done;
9110 }
9111 }
9112 /* It didn't match. We'll write it a an explicit INTEGER_CST
9113 node. */
9114 }
9115
9116 if (TYPE_P (t))
9117 {
9118 type_node (t);
9119 goto done;
9120 }
9121
9122 if (DECL_P (t))
9123 {
9124 if (DECL_TEMPLATE_PARM_P (t))
9125 {
9126 tpl_parm_value (t);
9127 goto done;
9128 }
9129
9130 if (!DECL_CONTEXT (t))
9131 {
9132 /* There are a few cases of decls with no context. We'll write
9133 these by value, but first assert they are cases we expect. */
9134 gcc_checking_assert (ref == WK_normal);
9135 switch (TREE_CODE (t))
9136 {
9137 default: gcc_unreachable ();
9138
9139 case LABEL_DECL:
9140 /* CASE_LABEL_EXPRs contain uncontexted LABEL_DECLs. */
9141 gcc_checking_assert (!DECL_NAME (t));
9142 break;
9143
9144 case VAR_DECL:
9145 /* AGGR_INIT_EXPRs cons up anonymous uncontexted VAR_DECLs. */
9146 gcc_checking_assert (!DECL_NAME (t)
9147 && DECL_ARTIFICIAL (t));
9148 break;
9149
9150 case PARM_DECL:
9151 /* REQUIRES_EXPRs have a tree list of uncontexted
9152 PARM_DECLS. It'd be nice if they had a
9153 distinguishing flag to double check. */
9154 break;
9155 }
9156 goto by_value;
9157 }
9158 }
9159
9160 skip_normal:
9161 if (DECL_P (t) && !decl_node (t, ref))
9162 goto done;
9163
9164 /* Otherwise by value */
9165 by_value:
9166 tree_value (t);
9167
9168 done:
9169 /* And, breath out. */
9170 dump.outdent ();
9171 }
9172
9173 /* Stream in a tree node. */
9174
9175 tree
9176 trees_in::tree_node (bool is_use)
9177 {
9178 if (get_overrun ())
9179 return NULL_TREE;
9180
9181 dump.indent ();
9182 int tag = i ();
9183 tree res = NULL_TREE;
9184 switch (tag)
9185 {
9186 default:
9187 /* backref, pull it out of the map. */
9188 res = back_ref (tag);
9189 break;
9190
9191 case tt_null:
9192 /* NULL_TREE. */
9193 break;
9194
9195 case tt_fixed:
9196 /* A fixed ref, find it in the fixed_ref array. */
9197 {
9198 unsigned fix = u ();
9199 if (fix < (*fixed_trees).length ())
9200 {
9201 res = (*fixed_trees)[fix];
9202 dump (dumper::TREE) && dump ("Read fixed:%u %C:%N%S", fix,
9203 TREE_CODE (res), res, res);
9204 }
9205
9206 if (!res)
9207 set_overrun ();
9208 }
9209 break;
9210
9211 case tt_parm:
9212 {
9213 tree fn = tree_node ();
9214 if (fn && TREE_CODE (fn) == FUNCTION_DECL)
9215 res = tree_node ();
9216 if (res)
9217 dump (dumper::TREE)
9218 && dump ("Read %s reference %N",
9219 TREE_CODE (res) == PARM_DECL ? "parameter" : "result",
9220 res);
9221 }
9222 break;
9223
9224 case tt_node:
9225 /* A new node. Stream it in. */
9226 res = tree_value ();
9227 break;
9228
9229 case tt_decl:
9230 /* A new decl. Stream it in. */
9231 res = decl_value ();
9232 break;
9233
9234 case tt_tpl_parm:
9235 /* A template parameter. Stream it in. */
9236 res = tpl_parm_value ();
9237 break;
9238
9239 case tt_id:
9240 /* An identifier node. */
9241 {
9242 size_t l;
9243 const char *chars = str (&l);
9244 res = get_identifier_with_length (chars, l);
9245 int tag = insert (res);
9246 dump (dumper::TREE)
9247 && dump ("Read identifier:%d %N", tag, res);
9248 }
9249 break;
9250
9251 case tt_conv_id:
9252 /* A conversion operator. Get the type and recreate the
9253 identifier. */
9254 {
9255 tree type = tree_node ();
9256 if (!get_overrun ())
9257 {
9258 res = type ? make_conv_op_name (type) : conv_op_identifier;
9259 int tag = insert (res);
9260 dump (dumper::TREE)
9261 && dump ("Created conv_op:%d %S for %N", tag, res, type);
9262 }
9263 }
9264 break;
9265
9266 case tt_anon_id:
9267 case tt_lambda_id:
9268 /* An anonymous or lambda id. */
9269 {
9270 res = make_anon_name ();
9271 if (tag == tt_lambda_id)
9272 IDENTIFIER_LAMBDA_P (res) = true;
9273 int tag = insert (res);
9274 dump (dumper::TREE)
9275 && dump ("Read %s identifier:%d %N",
9276 IDENTIFIER_LAMBDA_P (res) ? "lambda" : "anon", tag, res);
9277 }
9278 break;
9279
9280 case tt_typedef_type:
9281 res = tree_node ();
9282 if (res)
9283 {
9284 dump (dumper::TREE)
9285 && dump ("Read %stypedef %C:%N",
9286 DECL_IMPLICIT_TYPEDEF_P (res) ? "implicit " : "",
9287 TREE_CODE (res), res);
9288 res = TREE_TYPE (res);
9289 }
9290 break;
9291
9292 case tt_derived_type:
9293 /* A type derived from some other type. */
9294 {
9295 enum tree_code code = tree_code (u ());
9296 res = tree_node ();
9297
9298 switch (code)
9299 {
9300 default:
9301 set_overrun ();
9302 break;
9303
9304 case ARRAY_TYPE:
9305 {
9306 tree domain = tree_node ();
9307 int dep = u ();
9308 if (!get_overrun ())
9309 res = build_cplus_array_type (res, domain, dep);
9310 }
9311 break;
9312
9313 case COMPLEX_TYPE:
9314 if (!get_overrun ())
9315 res = build_complex_type (res);
9316 break;
9317
9318 case BOOLEAN_TYPE:
9319 {
9320 unsigned precision = u ();
9321 if (!get_overrun ())
9322 res = build_nonstandard_boolean_type (precision);
9323 }
9324 break;
9325
9326 case INTEGER_TYPE:
9327 if (res)
9328 {
9329 /* A range type (representing an array domain). */
9330 tree min = tree_node ();
9331 tree max = tree_node ();
9332
9333 if (!get_overrun ())
9334 res = build_range_type (res, min, max);
9335 }
9336 else
9337 {
9338 /* A new integral type (representing a bitfield). */
9339 unsigned enc = u ();
9340 if (!get_overrun ())
9341 res = build_nonstandard_integer_type (enc >> 1, enc & 1);
9342 }
9343 break;
9344
9345 case FUNCTION_TYPE:
9346 case METHOD_TYPE:
9347 {
9348 tree klass = code == METHOD_TYPE ? tree_node () : NULL_TREE;
9349 tree args = tree_node ();
9350 if (!get_overrun ())
9351 {
9352 if (klass)
9353 res = build_method_type_directly (klass, res, args);
9354 else
9355 res = build_function_type (res, args);
9356 }
9357 }
9358 break;
9359
9360 case OFFSET_TYPE:
9361 {
9362 tree base = tree_node ();
9363 if (!get_overrun ())
9364 res = build_offset_type (base, res);
9365 }
9366 break;
9367
9368 case POINTER_TYPE:
9369 if (!get_overrun ())
9370 res = build_pointer_type (res);
9371 break;
9372
9373 case REFERENCE_TYPE:
9374 {
9375 bool rval = bool (u ());
9376 if (!get_overrun ())
9377 res = cp_build_reference_type (res, rval);
9378 }
9379 break;
9380
9381 case DECLTYPE_TYPE:
9382 case TYPEOF_TYPE:
9383 case UNDERLYING_TYPE:
9384 {
9385 tree expr = tree_node ();
9386 if (!get_overrun ())
9387 {
9388 res = cxx_make_type (code);
9389 TYPE_VALUES_RAW (res) = expr;
9390 if (code == DECLTYPE_TYPE)
9391 tree_node_bools (res);
9392 SET_TYPE_STRUCTURAL_EQUALITY (res);
9393 }
9394 }
9395 break;
9396
9397 case TYPE_ARGUMENT_PACK:
9398 if (!get_overrun ())
9399 {
9400 tree pack = cxx_make_type (TYPE_ARGUMENT_PACK);
9401 SET_ARGUMENT_PACK_ARGS (pack, res);
9402 res = pack;
9403 }
9404 break;
9405
9406 case TYPE_PACK_EXPANSION:
9407 {
9408 bool local = u ();
9409 tree param_packs = tree_node ();
9410 if (!get_overrun ())
9411 {
9412 tree expn = cxx_make_type (TYPE_PACK_EXPANSION);
9413 SET_TYPE_STRUCTURAL_EQUALITY (expn);
9414 SET_PACK_EXPANSION_PATTERN (expn, res);
9415 PACK_EXPANSION_PARAMETER_PACKS (expn) = param_packs;
9416 PACK_EXPANSION_LOCAL_P (expn) = local;
9417 res = expn;
9418 }
9419 }
9420 break;
9421
9422 case TYPENAME_TYPE:
9423 {
9424 tree ctx = tree_node ();
9425 tree name = tree_node ();
9426 tree fullname = tree_node ();
9427 enum tag_types tag_type = tag_types (u ());
9428
9429 if (!get_overrun ())
9430 res = build_typename_type (ctx, name, fullname, tag_type);
9431 }
9432 break;
9433
9434 case UNBOUND_CLASS_TEMPLATE:
9435 {
9436 tree ctx = tree_node ();
9437 tree name = tree_node ();
9438 tree parms = tree_node ();
9439
9440 if (!get_overrun ())
9441 res = make_unbound_class_template_raw (ctx, name, parms);
9442 }
9443 break;
9444
9445 case VECTOR_TYPE:
9446 {
9447 unsigned HOST_WIDE_INT nunits = wu ();
9448 if (!get_overrun ())
9449 res = build_vector_type (res, static_cast<poly_int64> (nunits));
9450 }
9451 break;
9452 }
9453
9454 int tag = i ();
9455 if (!tag)
9456 {
9457 tag = insert (res);
9458 if (res)
9459 dump (dumper::TREE)
9460 && dump ("Created:%d derived type %C", tag, code);
9461 }
9462 else
9463 res = back_ref (tag);
9464 }
9465 break;
9466
9467 case tt_variant_type:
9468 /* Variant of some type. */
9469 {
9470 res = tree_node ();
9471 int flags = i ();
9472 if (get_overrun ())
9473 ;
9474 else if (flags < 0)
9475 /* No change. */;
9476 else if (TREE_CODE (res) == FUNCTION_TYPE
9477 || TREE_CODE (res) == METHOD_TYPE)
9478 {
9479 cp_ref_qualifier rqual = cp_ref_qualifier (flags & 3);
9480 bool late = (flags >> 2) & 1;
9481 cp_cv_quals quals = cp_cv_quals (flags >> 3);
9482
9483 tree raises = tree_node ();
9484 if (raises == error_mark_node)
9485 raises = TYPE_RAISES_EXCEPTIONS (res);
9486
9487 res = build_cp_fntype_variant (res, rqual, raises, late);
9488 if (TREE_CODE (res) == FUNCTION_TYPE)
9489 res = apply_memfn_quals (res, quals, rqual);
9490 }
9491 else
9492 {
9493 res = build_aligned_type (res, 1u << flags);
9494 TYPE_USER_ALIGN (res) = true;
9495 }
9496
9497 if (tree attribs = tree_node ())
9498 res = cp_build_type_attribute_variant (res, attribs);
9499
9500 int quals = i ();
9501 if (quals >= 0 && !get_overrun ())
9502 res = cp_build_qualified_type (res, quals);
9503
9504 int tag = i ();
9505 if (!tag)
9506 {
9507 tag = insert (res);
9508 if (res)
9509 dump (dumper::TREE)
9510 && dump ("Created:%d variant type %C", tag, TREE_CODE (res));
9511 }
9512 else
9513 res = back_ref (tag);
9514 }
9515 break;
9516
9517 case tt_tinfo_var:
9518 case tt_tinfo_typedef:
9519 /* A tinfo var or typedef. */
9520 {
9521 bool is_var = tag == tt_tinfo_var;
9522 unsigned ix = u ();
9523 tree type = NULL_TREE;
9524
9525 if (is_var)
9526 {
9527 tree name = tree_node ();
9528 type = tree_node ();
9529
9530 if (!get_overrun ())
9531 res = get_tinfo_decl_direct (type, name, int (ix));
9532 }
9533 else
9534 {
9535 if (!get_overrun ())
9536 {
9537 type = get_pseudo_tinfo_type (ix);
9538 res = TYPE_NAME (type);
9539 }
9540 }
9541 if (res)
9542 {
9543 int tag = insert (res);
9544 dump (dumper::TREE)
9545 && dump ("Created tinfo_%s:%d %S:%u for %N",
9546 is_var ? "var" : "decl", tag, res, ix, type);
9547 if (!is_var)
9548 {
9549 tag = insert (type);
9550 dump (dumper::TREE)
9551 && dump ("Created tinfo_type:%d %u %N", tag, ix, type);
9552 }
9553 }
9554 }
9555 break;
9556
9557 case tt_ptrmem_type:
9558 /* A pointer to member function. */
9559 {
9560 tree type = tree_node ();
9561 if (type && TREE_CODE (type) == POINTER_TYPE
9562 && TREE_CODE (TREE_TYPE (type)) == METHOD_TYPE)
9563 {
9564 res = build_ptrmemfunc_type (type);
9565 int tag = insert (res);
9566 dump (dumper::TREE) && dump ("Created:%d ptrmem type", tag);
9567 }
9568 else
9569 set_overrun ();
9570 }
9571 break;
9572
9573 case tt_enum_value:
9574 /* An enum const value. */
9575 {
9576 if (tree decl = tree_node ())
9577 {
9578 dump (dumper::TREE) && dump ("Read enum value %N", decl);
9579 res = DECL_INITIAL (decl);
9580 }
9581
9582 if (!res)
9583 set_overrun ();
9584 }
9585 break;
9586
9587 case tt_enum_decl:
9588 /* An enum decl. */
9589 {
9590 tree ctx = tree_node ();
9591 tree name = tree_node ();
9592
9593 if (!get_overrun ()
9594 && TREE_CODE (ctx) == ENUMERAL_TYPE)
9595 res = find_enum_member (ctx, name);
9596
9597 if (!res)
9598 set_overrun ();
9599 else
9600 {
9601 int tag = insert (res);
9602 dump (dumper::TREE)
9603 && dump ("Read enum decl:%d %C:%N", tag, TREE_CODE (res), res);
9604 }
9605 }
9606 break;
9607
9608 case tt_data_member:
9609 /* A data member. */
9610 {
9611 tree ctx = tree_node ();
9612 tree name = tree_node ();
9613
9614 if (!get_overrun ()
9615 && RECORD_OR_UNION_TYPE_P (ctx))
9616 {
9617 if (name)
9618 res = lookup_class_binding (ctx, name);
9619 else
9620 res = lookup_field_ident (ctx, u ());
9621
9622 if (!res
9623 || TREE_CODE (res) != FIELD_DECL
9624 || DECL_CONTEXT (res) != ctx)
9625 res = NULL_TREE;
9626 }
9627
9628 if (!res)
9629 set_overrun ();
9630 else
9631 {
9632 int tag = insert (res);
9633 dump (dumper::TREE)
9634 && dump ("Read member:%d %C:%N", tag, TREE_CODE (res), res);
9635 }
9636 }
9637 break;
9638
9639 case tt_binfo:
9640 /* A BINFO. Walk the tree of the dominating type. */
9641 {
9642 tree type;
9643 unsigned ix = binfo_mergeable (&type);
9644 if (type)
9645 {
9646 res = TYPE_BINFO (type);
9647 for (; ix && res; res = TREE_CHAIN (res))
9648 ix--;
9649 if (!res)
9650 set_overrun ();
9651 }
9652
9653 if (get_overrun ())
9654 break;
9655
9656 /* Insert binfo into backreferences. */
9657 tag = insert (res);
9658 dump (dumper::TREE) && dump ("Read binfo:%d %N", tag, res);
9659 }
9660 break;
9661
9662 case tt_vtable:
9663 {
9664 unsigned ix = u ();
9665 tree ctx = tree_node ();
9666 dump (dumper::TREE) && dump ("Reading vtable %N[%u]", ctx, ix);
9667 if (TREE_CODE (ctx) == RECORD_TYPE && TYPE_LANG_SPECIFIC (ctx))
9668 for (res = CLASSTYPE_VTABLES (ctx); res; res = DECL_CHAIN (res))
9669 if (!ix--)
9670 break;
9671 if (!res)
9672 set_overrun ();
9673 }
9674 break;
9675
9676 case tt_thunk:
9677 {
9678 int fixed = i ();
9679 tree target = tree_node ();
9680 tree virt = tree_node ();
9681
9682 for (tree thunk = DECL_THUNKS (target);
9683 thunk; thunk = DECL_CHAIN (thunk))
9684 if (THUNK_FIXED_OFFSET (thunk) == fixed
9685 && !THUNK_VIRTUAL_OFFSET (thunk) == !virt
9686 && (!virt
9687 || tree_int_cst_equal (virt, THUNK_VIRTUAL_OFFSET (thunk))))
9688 {
9689 res = thunk;
9690 break;
9691 }
9692
9693 int tag = insert (res);
9694 if (res)
9695 dump (dumper::TREE)
9696 && dump ("Read:%d thunk %N to %N", tag, DECL_NAME (res), target);
9697 else
9698 set_overrun ();
9699 }
9700 break;
9701
9702 case tt_clone_ref:
9703 {
9704 tree target = tree_node ();
9705 tree name = tree_node ();
9706
9707 if (DECL_P (target) && DECL_MAYBE_IN_CHARGE_CDTOR_P (target))
9708 {
9709 tree clone;
9710 FOR_EVERY_CLONE (clone, target)
9711 if (DECL_NAME (clone) == name)
9712 {
9713 res = clone;
9714 break;
9715 }
9716 }
9717
9718 if (!res)
9719 set_overrun ();
9720 int tag = insert (res);
9721 if (res)
9722 dump (dumper::TREE)
9723 && dump ("Read:%d clone %N of %N", tag, DECL_NAME (res), target);
9724 else
9725 set_overrun ();
9726 }
9727 break;
9728
9729 case tt_entity:
9730 /* Index into the entity table. Perhaps not loaded yet! */
9731 {
9732 unsigned origin = state->slurp->remap_module (u ());
9733 unsigned ident = u ();
9734 module_state *from = (*modules)[origin];
9735
9736 if (!origin || ident >= from->entity_num)
9737 set_overrun ();
9738 if (!get_overrun ())
9739 {
9740 binding_slot *slot = &(*entity_ary)[from->entity_lwm + ident];
9741 if (slot->is_lazy ())
9742 if (!from->lazy_load (ident, slot))
9743 set_overrun ();
9744 res = *slot;
9745 }
9746
9747 if (res)
9748 {
9749 const char *kind = (origin != state->mod ? "Imported" : "Named");
9750 int tag = insert (res);
9751 dump (dumper::TREE)
9752 && dump ("%s:%d %C:%N@%M", kind, tag, TREE_CODE (res),
9753 res, (*modules)[origin]);
9754
9755 if (!add_indirects (res))
9756 {
9757 set_overrun ();
9758 res = NULL_TREE;
9759 }
9760 }
9761 }
9762 break;
9763
9764 case tt_template:
9765 /* A template. */
9766 if (tree tpl = tree_node ())
9767 {
9768 res = DECL_TEMPLATE_RESULT (tpl);
9769 dump (dumper::TREE)
9770 && dump ("Read template %C:%N", TREE_CODE (res), res);
9771 }
9772 break;
9773 }
9774
9775 if (is_use && !unused && res && DECL_P (res) && !TREE_USED (res))
9776 {
9777 /* Mark decl used as mark_used does -- we cannot call
9778 mark_used in the middle of streaming, we only need a subset
9779 of its functionality. */
9780 TREE_USED (res) = true;
9781
9782 /* And for structured bindings also the underlying decl. */
9783 if (DECL_DECOMPOSITION_P (res) && DECL_DECOMP_BASE (res))
9784 TREE_USED (DECL_DECOMP_BASE (res)) = true;
9785
9786 if (DECL_CLONED_FUNCTION_P (res))
9787 TREE_USED (DECL_CLONED_FUNCTION (res)) = true;
9788 }
9789
9790 dump.outdent ();
9791 return res;
9792 }
9793
9794 void
9795 trees_out::tpl_parms (tree parms, unsigned &tpl_levels)
9796 {
9797 if (!parms)
9798 return;
9799
9800 if (TREE_VISITED (parms))
9801 {
9802 ref_node (parms);
9803 return;
9804 }
9805
9806 tpl_parms (TREE_CHAIN (parms), tpl_levels);
9807
9808 tree vec = TREE_VALUE (parms);
9809 unsigned len = TREE_VEC_LENGTH (vec);
9810 /* Depth. */
9811 int tag = insert (parms);
9812 if (streaming_p ())
9813 {
9814 i (len + 1);
9815 dump (dumper::TREE)
9816 && dump ("Writing template parms:%d level:%N length:%d",
9817 tag, TREE_PURPOSE (parms), len);
9818 }
9819 tree_node (TREE_PURPOSE (parms));
9820
9821 for (unsigned ix = 0; ix != len; ix++)
9822 {
9823 tree parm = TREE_VEC_ELT (vec, ix);
9824 tree decl = TREE_VALUE (parm);
9825
9826 gcc_checking_assert (DECL_TEMPLATE_PARM_P (decl));
9827 if (CHECKING_P)
9828 switch (TREE_CODE (decl))
9829 {
9830 default: gcc_unreachable ();
9831
9832 case TEMPLATE_DECL:
9833 gcc_assert ((TREE_CODE (TREE_TYPE (decl)) == TEMPLATE_TEMPLATE_PARM)
9834 && (TREE_CODE (DECL_TEMPLATE_RESULT (decl)) == TYPE_DECL)
9835 && (TYPE_NAME (TREE_TYPE (decl)) == decl));
9836 break;
9837
9838 case TYPE_DECL:
9839 gcc_assert ((TREE_CODE (TREE_TYPE (decl)) == TEMPLATE_TYPE_PARM)
9840 && (TYPE_NAME (TREE_TYPE (decl)) == decl));
9841 break;
9842
9843 case PARM_DECL:
9844 gcc_assert ((TREE_CODE (DECL_INITIAL (decl)) == TEMPLATE_PARM_INDEX)
9845 && (TREE_CODE (TEMPLATE_PARM_DECL (DECL_INITIAL (decl)))
9846 == CONST_DECL)
9847 && (DECL_TEMPLATE_PARM_P
9848 (TEMPLATE_PARM_DECL (DECL_INITIAL (decl)))));
9849 break;
9850 }
9851
9852 tree_node (decl);
9853 tree_node (TEMPLATE_PARM_CONSTRAINTS (parm));
9854 }
9855
9856 tpl_levels++;
9857 }
9858
9859 tree
9860 trees_in::tpl_parms (unsigned &tpl_levels)
9861 {
9862 tree parms = NULL_TREE;
9863
9864 while (int len = i ())
9865 {
9866 if (len < 0)
9867 {
9868 parms = back_ref (len);
9869 continue;
9870 }
9871
9872 len -= 1;
9873 parms = tree_cons (NULL_TREE, NULL_TREE, parms);
9874 int tag = insert (parms);
9875 TREE_PURPOSE (parms) = tree_node ();
9876
9877 dump (dumper::TREE)
9878 && dump ("Reading template parms:%d level:%N length:%d",
9879 tag, TREE_PURPOSE (parms), len);
9880
9881 tree vec = make_tree_vec (len);
9882 for (int ix = 0; ix != len; ix++)
9883 {
9884 tree decl = tree_node ();
9885 if (!decl)
9886 return NULL_TREE;
9887
9888 tree parm = build_tree_list (NULL, decl);
9889 TEMPLATE_PARM_CONSTRAINTS (parm) = tree_node ();
9890
9891 TREE_VEC_ELT (vec, ix) = parm;
9892 }
9893
9894 TREE_VALUE (parms) = vec;
9895 tpl_levels++;
9896 }
9897
9898 return parms;
9899 }
9900
9901 void
9902 trees_out::tpl_parms_fini (tree tmpl, unsigned tpl_levels)
9903 {
9904 for (tree parms = DECL_TEMPLATE_PARMS (tmpl);
9905 tpl_levels--; parms = TREE_CHAIN (parms))
9906 {
9907 tree vec = TREE_VALUE (parms);
9908
9909 tree_node (TREE_TYPE (vec));
9910 tree dflt = error_mark_node;
9911 for (unsigned ix = TREE_VEC_LENGTH (vec); ix--;)
9912 {
9913 tree parm = TREE_VEC_ELT (vec, ix);
9914 if (dflt)
9915 {
9916 dflt = TREE_PURPOSE (parm);
9917 tree_node (dflt);
9918 }
9919
9920 if (streaming_p ())
9921 {
9922 tree decl = TREE_VALUE (parm);
9923 if (TREE_CODE (decl) == TEMPLATE_DECL)
9924 {
9925 tree ctx = DECL_CONTEXT (decl);
9926 tree inner = DECL_TEMPLATE_RESULT (decl);
9927 tree tpi = (TREE_CODE (inner) == TYPE_DECL
9928 ? TEMPLATE_TYPE_PARM_INDEX (TREE_TYPE (decl))
9929 : DECL_INITIAL (inner));
9930 bool original = (TEMPLATE_PARM_LEVEL (tpi)
9931 == TEMPLATE_PARM_ORIG_LEVEL (tpi));
9932 /* Original template template parms have a context
9933 of their owning template. Reduced ones do not. */
9934 gcc_checking_assert (original ? ctx == tmpl : !ctx);
9935 }
9936 }
9937 }
9938 }
9939 }
9940
9941 bool
9942 trees_in::tpl_parms_fini (tree tmpl, unsigned tpl_levels)
9943 {
9944 for (tree parms = DECL_TEMPLATE_PARMS (tmpl);
9945 tpl_levels--; parms = TREE_CHAIN (parms))
9946 {
9947 tree vec = TREE_VALUE (parms);
9948 tree dflt = error_mark_node;
9949
9950 TREE_TYPE (vec) = tree_node ();
9951 for (unsigned ix = TREE_VEC_LENGTH (vec); ix--;)
9952 {
9953 tree parm = TREE_VEC_ELT (vec, ix);
9954 if (dflt)
9955 {
9956 dflt = tree_node ();
9957 if (get_overrun ())
9958 return false;
9959 TREE_PURPOSE (parm) = dflt;
9960 }
9961
9962 tree decl = TREE_VALUE (parm);
9963 if (TREE_CODE (decl) == TEMPLATE_DECL)
9964 {
9965 tree inner = DECL_TEMPLATE_RESULT (decl);
9966 tree tpi = (TREE_CODE (inner) == TYPE_DECL
9967 ? TEMPLATE_TYPE_PARM_INDEX (TREE_TYPE (decl))
9968 : DECL_INITIAL (inner));
9969 bool original = (TEMPLATE_PARM_LEVEL (tpi)
9970 == TEMPLATE_PARM_ORIG_LEVEL (tpi));
9971 /* Original template template parms have a context
9972 of their owning template. Reduced ones do not. */
9973 if (original)
9974 DECL_CONTEXT (decl) = tmpl;
9975 }
9976 }
9977 }
9978 return true;
9979 }
9980
9981 /* PARMS is a LIST, one node per level.
9982 TREE_VALUE is a TREE_VEC of parm info for that level.
9983 each ELT is a TREE_LIST
9984 TREE_VALUE is PARM_DECL, TYPE_DECL or TEMPLATE_DECL
9985 TREE_PURPOSE is the default value. */
9986
9987 void
9988 trees_out::tpl_header (tree tpl, unsigned *tpl_levels)
9989 {
9990 tree parms = DECL_TEMPLATE_PARMS (tpl);
9991 tpl_parms (parms, *tpl_levels);
9992
9993 /* Mark end. */
9994 if (streaming_p ())
9995 u (0);
9996
9997 if (*tpl_levels)
9998 tree_node (TEMPLATE_PARMS_CONSTRAINTS (parms));
9999 }
10000
10001 bool
10002 trees_in::tpl_header (tree tpl, unsigned *tpl_levels)
10003 {
10004 tree parms = tpl_parms (*tpl_levels);
10005 if (!parms)
10006 return false;
10007
10008 DECL_TEMPLATE_PARMS (tpl) = parms;
10009
10010 if (*tpl_levels)
10011 TEMPLATE_PARMS_CONSTRAINTS (parms) = tree_node ();
10012
10013 return true;
10014 }
10015
10016 /* Stream skeleton parm nodes, with their flags, type & parm indices.
10017 All the parms will have consecutive tags. */
10018
10019 void
10020 trees_out::fn_parms_init (tree fn)
10021 {
10022 /* First init them. */
10023 int base_tag = ref_num - 1;
10024 int ix = 0;
10025 for (tree parm = DECL_ARGUMENTS (fn);
10026 parm; parm = DECL_CHAIN (parm), ix++)
10027 {
10028 if (streaming_p ())
10029 {
10030 start (parm);
10031 tree_node_bools (parm);
10032 }
10033 int tag = insert (parm);
10034 gcc_checking_assert (base_tag - ix == tag);
10035 }
10036 /* Mark the end. */
10037 if (streaming_p ())
10038 u (0);
10039
10040 /* Now stream their contents. */
10041 ix = 0;
10042 for (tree parm = DECL_ARGUMENTS (fn);
10043 parm; parm = DECL_CHAIN (parm), ix++)
10044 {
10045 if (streaming_p ())
10046 dump (dumper::TREE)
10047 && dump ("Writing parm:%d %u (%N) of %N",
10048 base_tag - ix, ix, parm, fn);
10049 tree_node_vals (parm);
10050 }
10051 }
10052
10053 /* Build skeleton parm nodes, read their flags, type & parm indices. */
10054
10055 int
10056 trees_in::fn_parms_init (tree fn)
10057 {
10058 int base_tag = ~(int)back_refs.length ();
10059
10060 tree *parm_ptr = &DECL_ARGUMENTS (fn);
10061 int ix = 0;
10062 for (; int code = u (); ix++)
10063 {
10064 tree parm = start (code);
10065 if (!tree_node_bools (parm))
10066 return 0;
10067
10068 int tag = insert (parm);
10069 gcc_checking_assert (base_tag - ix == tag);
10070 *parm_ptr = parm;
10071 parm_ptr = &DECL_CHAIN (parm);
10072 }
10073
10074 ix = 0;
10075 for (tree parm = DECL_ARGUMENTS (fn);
10076 parm; parm = DECL_CHAIN (parm), ix++)
10077 {
10078 dump (dumper::TREE)
10079 && dump ("Reading parm:%d %u (%N) of %N",
10080 base_tag - ix, ix, parm, fn);
10081 if (!tree_node_vals (parm))
10082 return 0;
10083 }
10084
10085 return base_tag;
10086 }
10087
10088 /* Read the remaining parm node data. Replace with existing (if
10089 non-null) in the map. */
10090
10091 void
10092 trees_in::fn_parms_fini (int tag, tree fn, tree existing, bool is_defn)
10093 {
10094 tree existing_parm = existing ? DECL_ARGUMENTS (existing) : NULL_TREE;
10095 tree parms = DECL_ARGUMENTS (fn);
10096 unsigned ix = 0;
10097 for (tree parm = parms; parm; parm = DECL_CHAIN (parm), ix++)
10098 {
10099 if (existing_parm)
10100 {
10101 if (is_defn && !DECL_SAVED_TREE (existing))
10102 {
10103 /* If we're about to become the definition, set the
10104 names of the parms from us. */
10105 DECL_NAME (existing_parm) = DECL_NAME (parm);
10106 DECL_SOURCE_LOCATION (existing_parm) = DECL_SOURCE_LOCATION (parm);
10107 }
10108
10109 back_refs[~tag] = existing_parm;
10110 existing_parm = DECL_CHAIN (existing_parm);
10111 }
10112 tag--;
10113 }
10114 }
10115
10116 /* DEP is the depset of some decl we're streaming by value. Determine
10117 the merging behaviour. */
10118
10119 merge_kind
10120 trees_out::get_merge_kind (tree decl, depset *dep)
10121 {
10122 if (!dep)
10123 {
10124 if (VAR_OR_FUNCTION_DECL_P (decl))
10125 {
10126 /* Any var or function with template info should have DEP. */
10127 gcc_checking_assert (!DECL_LANG_SPECIFIC (decl)
10128 || !DECL_TEMPLATE_INFO (decl));
10129 if (DECL_LOCAL_DECL_P (decl))
10130 return MK_unique;
10131 }
10132
10133 /* Either unique, or some member of a class that cannot have an
10134 out-of-class definition. For instance a FIELD_DECL. */
10135 tree ctx = CP_DECL_CONTEXT (decl);
10136 if (TREE_CODE (ctx) == FUNCTION_DECL)
10137 {
10138 /* USING_DECLs cannot have DECL_TEMPLATE_INFO -- this isn't
10139 permitting them to have one. */
10140 gcc_checking_assert (TREE_CODE (decl) == USING_DECL
10141 || !DECL_LANG_SPECIFIC (decl)
10142 || !DECL_TEMPLATE_INFO (decl));
10143
10144 return MK_unique;
10145 }
10146
10147 if (TREE_CODE (decl) == TEMPLATE_DECL
10148 && DECL_UNINSTANTIATED_TEMPLATE_FRIEND_P (decl))
10149 return MK_local_friend;
10150
10151 gcc_checking_assert (TYPE_P (ctx));
10152 if (TREE_CODE (decl) == USING_DECL)
10153 return MK_field;
10154
10155 if (TREE_CODE (decl) == FIELD_DECL)
10156 {
10157 if (DECL_NAME (decl))
10158 {
10159 /* Anonymous FIELD_DECLs have a NULL name. */
10160 gcc_checking_assert (!IDENTIFIER_ANON_P (DECL_NAME (decl)));
10161 return MK_named;
10162 }
10163
10164 if (!DECL_NAME (decl)
10165 && !RECORD_OR_UNION_TYPE_P (TREE_TYPE (decl))
10166 && !DECL_BIT_FIELD_REPRESENTATIVE (decl))
10167 {
10168 /* The underlying storage unit for a bitfield. We do not
10169 need to dedup it, because it's only reachable through
10170 the bitfields it represents. And those are deduped. */
10171 // FIXME: Is that assertion correct -- do we ever fish it
10172 // out and put it in an expr?
10173 gcc_checking_assert ((TREE_CODE (TREE_TYPE (decl)) == ARRAY_TYPE
10174 ? TREE_CODE (TREE_TYPE (TREE_TYPE (decl)))
10175 : TREE_CODE (TREE_TYPE (decl)))
10176 == INTEGER_TYPE);
10177 return MK_unique;
10178 }
10179
10180 return MK_field;
10181 }
10182
10183 if (TREE_CODE (decl) == CONST_DECL)
10184 return MK_named;
10185
10186 if (TREE_CODE (decl) == VAR_DECL
10187 && DECL_VTABLE_OR_VTT_P (decl))
10188 return MK_vtable;
10189
10190 if (DECL_THUNK_P (decl))
10191 /* Thunks are unique-enough, because they're only referenced
10192 from the vtable. And that's either new (so we want the
10193 thunks), or it's a duplicate (so it will be dropped). */
10194 return MK_unique;
10195
10196 /* There should be no other cases. */
10197 gcc_unreachable ();
10198 }
10199
10200 gcc_checking_assert (TREE_CODE (decl) != FIELD_DECL
10201 && TREE_CODE (decl) != USING_DECL
10202 && TREE_CODE (decl) != CONST_DECL);
10203
10204 if (is_key_order ())
10205 {
10206 /* When doing the mergeablilty graph, there's an indirection to
10207 the actual depset. */
10208 gcc_assert (dep->is_special ());
10209 dep = dep->deps[0];
10210 }
10211
10212 gcc_checking_assert (decl == dep->get_entity ());
10213
10214 merge_kind mk = MK_named;
10215 switch (dep->get_entity_kind ())
10216 {
10217 default:
10218 gcc_unreachable ();
10219
10220 case depset::EK_PARTIAL:
10221 mk = MK_partial;
10222 break;
10223
10224 case depset::EK_DECL:
10225 {
10226 tree ctx = CP_DECL_CONTEXT (decl);
10227
10228 switch (TREE_CODE (ctx))
10229 {
10230 default:
10231 gcc_unreachable ();
10232
10233 case FUNCTION_DECL:
10234 // FIXME: This can occur for (a) voldemorty TYPE_DECLS
10235 // (which are returned from a function), or (b)
10236 // block-scope class definitions in template functions.
10237 // These are as unique as the containing function. While
10238 // on read-back we can discover if the CTX was a
10239 // duplicate, we don't have a mechanism to get from the
10240 // existing CTX to the existing version of this decl.
10241 gcc_checking_assert
10242 (DECL_IMPLICIT_TYPEDEF_P (STRIP_TEMPLATE (decl)));
10243
10244 mk = MK_unique;
10245 break;
10246
10247 case RECORD_TYPE:
10248 case UNION_TYPE:
10249 if (DECL_NAME (decl) == as_base_identifier)
10250 mk = MK_as_base;
10251 else if (IDENTIFIER_ANON_P (DECL_NAME (decl)))
10252 mk = MK_field;
10253 break;
10254
10255 case NAMESPACE_DECL:
10256 if (DECL_IMPLICIT_TYPEDEF_P (STRIP_TEMPLATE (decl))
10257 && LAMBDA_TYPE_P (TREE_TYPE (decl)))
10258 if (tree scope
10259 = LAMBDA_EXPR_EXTRA_SCOPE (CLASSTYPE_LAMBDA_EXPR
10260 (TREE_TYPE (decl))))
10261 if (TREE_CODE (scope) == VAR_DECL
10262 && DECL_MODULE_ATTACHMENTS_P (scope))
10263 {
10264 mk = MK_attached;
10265 break;
10266 }
10267
10268 if (TREE_CODE (decl) == TEMPLATE_DECL
10269 && DECL_UNINSTANTIATED_TEMPLATE_FRIEND_P (decl))
10270 mk = MK_local_friend;
10271 else if (IDENTIFIER_ANON_P (DECL_NAME (decl)))
10272 {
10273 if (DECL_IMPLICIT_TYPEDEF_P (decl)
10274 && UNSCOPED_ENUM_P (TREE_TYPE (decl))
10275 && TYPE_VALUES (TREE_TYPE (decl)))
10276 /* Keyed by first enum value, and underlying type. */
10277 mk = MK_enum;
10278 else
10279 /* No way to merge it, it is an ODR land-mine. */
10280 mk = MK_unique;
10281 }
10282 }
10283 }
10284 break;
10285
10286 case depset::EK_SPECIALIZATION:
10287 {
10288 gcc_checking_assert (dep->is_special ());
10289 spec_entry *entry = reinterpret_cast <spec_entry *> (dep->deps[0]);
10290
10291 if (TREE_CODE (DECL_CONTEXT (decl)) == FUNCTION_DECL)
10292 /* An block-scope classes of templates are themselves
10293 templates. */
10294 gcc_checking_assert (DECL_IMPLICIT_TYPEDEF_P (decl));
10295
10296 if (dep->is_friend_spec ())
10297 mk = MK_friend_spec;
10298 else if (dep->is_type_spec ())
10299 mk = MK_type_spec;
10300 else if (dep->is_alias ())
10301 mk = MK_alias_spec;
10302 else
10303 mk = MK_decl_spec;
10304
10305 if (TREE_CODE (decl) == TEMPLATE_DECL)
10306 {
10307 tree res = DECL_TEMPLATE_RESULT (decl);
10308 if (!(mk & MK_tmpl_decl_mask))
10309 res = TREE_TYPE (res);
10310
10311 if (res == entry->spec)
10312 /* We check we can get back to the template during
10313 streaming. */
10314 mk = merge_kind (mk | MK_tmpl_tmpl_mask);
10315 }
10316 }
10317 break;
10318 }
10319
10320 return mk;
10321 }
10322
10323
10324 /* The container of DECL -- not necessarily its context! */
10325
10326 tree
10327 trees_out::decl_container (tree decl)
10328 {
10329 int use_tpl;
10330 tree tpl = NULL_TREE;
10331 if (tree template_info = node_template_info (decl, use_tpl))
10332 tpl = TI_TEMPLATE (template_info);
10333 if (tpl == decl)
10334 tpl = nullptr;
10335
10336 /* Stream the template we're instantiated from. */
10337 tree_node (tpl);
10338
10339 tree container = NULL_TREE;
10340 if (TREE_CODE (decl) == TEMPLATE_DECL
10341 && DECL_UNINSTANTIATED_TEMPLATE_FRIEND_P (decl))
10342 container = DECL_CHAIN (decl);
10343 else
10344 container = CP_DECL_CONTEXT (decl);
10345
10346 if (TYPE_P (container))
10347 container = TYPE_NAME (container);
10348
10349 tree_node (container);
10350
10351 return container;
10352 }
10353
10354 tree
10355 trees_in::decl_container ()
10356 {
10357 /* The maybe-template. */
10358 (void)tree_node ();
10359
10360 tree container = tree_node ();
10361
10362 return container;
10363 }
10364
10365 /* Write out key information about a mergeable DEP. Does not write
10366 the contents of DEP itself. The context has already been
10367 written. The container has already been streamed. */
10368
10369 void
10370 trees_out::key_mergeable (int tag, merge_kind mk, tree decl, tree inner,
10371 tree container, depset *dep)
10372 {
10373 if (dep && is_key_order ())
10374 {
10375 gcc_checking_assert (dep->is_special ());
10376 dep = dep->deps[0];
10377 }
10378
10379 if (streaming_p ())
10380 dump (dumper::MERGE)
10381 && dump ("Writing:%d's %s merge key (%s) %C:%N", tag, merge_kind_name[mk],
10382 dep ? dep->entity_kind_name () : "contained",
10383 TREE_CODE (decl), decl);
10384
10385 /* Now write the locating information. */
10386 if (mk & MK_template_mask)
10387 {
10388 /* Specializations are located via their originating template,
10389 and the set of template args they specialize. */
10390 gcc_checking_assert (dep && dep->is_special ());
10391 spec_entry *entry = reinterpret_cast <spec_entry *> (dep->deps[0]);
10392
10393 tree_node (entry->tmpl);
10394 tree_node (entry->args);
10395 if (streaming_p ())
10396 u (get_mergeable_specialization_flags (entry->tmpl, decl));
10397 if (mk & MK_tmpl_decl_mask)
10398 if (flag_concepts && TREE_CODE (inner) == VAR_DECL)
10399 {
10400 /* Variable template partial specializations might need
10401 constraints (see spec_hasher::equal). It's simpler to
10402 write NULL when we don't need them. */
10403 tree constraints = NULL_TREE;
10404
10405 if (uses_template_parms (entry->args))
10406 constraints = get_constraints (inner);
10407 tree_node (constraints);
10408 }
10409
10410 if (CHECKING_P)
10411 {
10412 /* Make sure we can locate the decl. */
10413 tree existing = match_mergeable_specialization
10414 (bool (mk & MK_tmpl_decl_mask), entry, false);
10415
10416 gcc_assert (existing);
10417 if (mk & MK_tmpl_decl_mask)
10418 {
10419 if (mk & MK_tmpl_alias_mask)
10420 /* It should be in both tables. */
10421 gcc_assert (match_mergeable_specialization (false, entry, false)
10422 == TREE_TYPE (existing));
10423 else if (mk & MK_tmpl_tmpl_mask)
10424 if (tree ti = DECL_TEMPLATE_INFO (existing))
10425 existing = TI_TEMPLATE (ti);
10426 }
10427 else
10428 {
10429 if (!(mk & MK_tmpl_tmpl_mask))
10430 existing = TYPE_NAME (existing);
10431 else if (tree ti = CLASSTYPE_TEMPLATE_INFO (existing))
10432 existing = TI_TEMPLATE (ti);
10433 }
10434
10435 /* The walkabout should have found ourselves. */
10436 gcc_assert (existing == decl);
10437 }
10438 }
10439 else if (mk != MK_unique)
10440 {
10441 merge_key key;
10442 tree name = DECL_NAME (decl);
10443
10444 switch (mk)
10445 {
10446 default:
10447 gcc_unreachable ();
10448
10449 case MK_named:
10450 case MK_friend_spec:
10451 if (IDENTIFIER_CONV_OP_P (name))
10452 name = conv_op_identifier;
10453
10454 if (inner && TREE_CODE (inner) == FUNCTION_DECL)
10455 {
10456 /* Functions are distinguished by parameter types. */
10457 tree fn_type = TREE_TYPE (inner);
10458
10459 key.ref_q = type_memfn_rqual (fn_type);
10460 key.args = TYPE_ARG_TYPES (fn_type);
10461
10462 if (tree reqs = get_constraints (inner))
10463 {
10464 if (cxx_dialect < cxx20)
10465 reqs = CI_ASSOCIATED_CONSTRAINTS (reqs);
10466 else
10467 reqs = CI_DECLARATOR_REQS (reqs);
10468 key.constraints = reqs;
10469 }
10470
10471 if (IDENTIFIER_CONV_OP_P (name)
10472 || (decl != inner
10473 && !(name == fun_identifier
10474 /* In case the user names something _FUN */
10475 && LAMBDA_TYPE_P (DECL_CONTEXT (inner)))))
10476 /* And a function template, or conversion operator needs
10477 the return type. Except for the _FUN thunk of a
10478 generic lambda, which has a recursive decl_type'd
10479 return type. */
10480 // FIXME: What if the return type is a voldemort?
10481 key.ret = fndecl_declared_return_type (inner);
10482 }
10483
10484 if (mk == MK_friend_spec)
10485 {
10486 gcc_checking_assert (dep && dep->is_special ());
10487 spec_entry *entry = reinterpret_cast <spec_entry *> (dep->deps[0]);
10488
10489 tree_node (entry->tmpl);
10490 tree_node (entry->args);
10491 if (streaming_p ())
10492 u (get_mergeable_specialization_flags (entry->tmpl, decl));
10493 }
10494 break;
10495
10496 case MK_field:
10497 {
10498 unsigned ix = 0;
10499 if (TREE_CODE (inner) != FIELD_DECL)
10500 name = NULL_TREE;
10501 else
10502 gcc_checking_assert (!name || !IDENTIFIER_ANON_P (name));
10503
10504 for (tree field = TYPE_FIELDS (TREE_TYPE (container));
10505 ; field = DECL_CHAIN (field))
10506 {
10507 tree finner = STRIP_TEMPLATE (field);
10508 if (TREE_CODE (finner) == TREE_CODE (inner))
10509 {
10510 if (finner == inner)
10511 break;
10512 ix++;
10513 }
10514 }
10515 key.index = ix;
10516 }
10517 break;
10518
10519 case MK_vtable:
10520 {
10521 tree vtable = CLASSTYPE_VTABLES (TREE_TYPE (container));
10522 for (unsigned ix = 0; ; vtable = DECL_CHAIN (vtable), ix++)
10523 if (vtable == decl)
10524 {
10525 key.index = ix;
10526 break;
10527 }
10528 name = NULL_TREE;
10529 }
10530 break;
10531
10532 case MK_as_base:
10533 gcc_checking_assert
10534 (decl == TYPE_NAME (CLASSTYPE_AS_BASE (TREE_TYPE (container))));
10535 break;
10536
10537 case MK_local_friend:
10538 {
10539 /* Find by index on the class's DECL_LIST */
10540 unsigned ix = 0;
10541 for (tree decls = CLASSTYPE_DECL_LIST (TREE_CHAIN (decl));
10542 decls; decls = TREE_CHAIN (decls))
10543 if (!TREE_PURPOSE (decls))
10544 {
10545 tree frnd = friend_from_decl_list (TREE_VALUE (decls));
10546 if (frnd == decl)
10547 break;
10548 ix++;
10549 }
10550 key.index = ix;
10551 name = NULL_TREE;
10552 }
10553 break;
10554
10555 case MK_enum:
10556 {
10557 /* Anonymous enums are located by their first identifier,
10558 and underlying type. */
10559 tree type = TREE_TYPE (decl);
10560
10561 gcc_checking_assert (UNSCOPED_ENUM_P (type));
10562 /* Using the type name drops the bit precision we might
10563 have been using on the enum. */
10564 key.ret = TYPE_NAME (ENUM_UNDERLYING_TYPE (type));
10565 if (tree values = TYPE_VALUES (type))
10566 name = DECL_NAME (TREE_VALUE (values));
10567 }
10568 break;
10569
10570 case MK_attached:
10571 {
10572 gcc_checking_assert (LAMBDA_TYPE_P (TREE_TYPE (inner)));
10573 tree scope = LAMBDA_EXPR_EXTRA_SCOPE (CLASSTYPE_LAMBDA_EXPR
10574 (TREE_TYPE (inner)));
10575 gcc_checking_assert (TREE_CODE (scope) == VAR_DECL);
10576 attachset *root = attached_table->get (DECL_UID (scope));
10577 unsigned ix = root->num;
10578 /* If we don't find it, we'll write a really big number
10579 that the reader will ignore. */
10580 while (ix--)
10581 if (root->values[ix] == inner)
10582 break;
10583
10584 /* Use the attached-to decl as the 'name'. */
10585 name = scope;
10586 key.index = ix;
10587 }
10588 break;
10589
10590 case MK_partial:
10591 {
10592 key.constraints = get_constraints (inner);
10593 key.ret = CLASSTYPE_TI_TEMPLATE (TREE_TYPE (inner));
10594 key.args = CLASSTYPE_TI_ARGS (TREE_TYPE (inner));
10595 }
10596 break;
10597 }
10598
10599 tree_node (name);
10600 if (streaming_p ())
10601 {
10602 unsigned code = (key.ref_q << 0) | (key.index << 2);
10603 u (code);
10604 }
10605
10606 if (mk == MK_enum)
10607 tree_node (key.ret);
10608 else if (mk == MK_partial
10609 || (mk == MK_named && inner
10610 && TREE_CODE (inner) == FUNCTION_DECL))
10611 {
10612 tree_node (key.ret);
10613 tree arg = key.args;
10614 if (mk == MK_named)
10615 while (arg && arg != void_list_node)
10616 {
10617 tree_node (TREE_VALUE (arg));
10618 arg = TREE_CHAIN (arg);
10619 }
10620 tree_node (arg);
10621 tree_node (key.constraints);
10622 }
10623 }
10624 }
10625
10626 /* DECL is a new declaration that may be duplicated in OVL. Use RET &
10627 ARGS to find its clone, or NULL. If DECL's DECL_NAME is NULL, this
10628 has been found by a proxy. It will be an enum type located by it's
10629 first member.
10630
10631 We're conservative with matches, so ambiguous decls will be
10632 registered as different, then lead to a lookup error if the two
10633 modules are both visible. Perhaps we want to do something similar
10634 to duplicate decls to get ODR errors on loading? We already have
10635 some special casing for namespaces. */
10636
10637 static tree
10638 check_mergeable_decl (merge_kind mk, tree decl, tree ovl, merge_key const &key)
10639 {
10640 tree found = NULL_TREE;
10641 for (ovl_iterator iter (ovl); !found && iter; ++iter)
10642 {
10643 tree match = *iter;
10644
10645 tree d_inner = decl;
10646 tree m_inner = match;
10647
10648 again:
10649 if (TREE_CODE (d_inner) != TREE_CODE (m_inner))
10650 {
10651 if (TREE_CODE (match) == NAMESPACE_DECL
10652 && !DECL_NAMESPACE_ALIAS (match))
10653 /* Namespaces are never overloaded. */
10654 found = match;
10655
10656 continue;
10657 }
10658
10659 switch (TREE_CODE (d_inner))
10660 {
10661 case TEMPLATE_DECL:
10662 if (template_heads_equivalent_p (d_inner, m_inner))
10663 {
10664 d_inner = DECL_TEMPLATE_RESULT (d_inner);
10665 m_inner = DECL_TEMPLATE_RESULT (m_inner);
10666 if (d_inner == error_mark_node
10667 && TYPE_DECL_ALIAS_P (m_inner))
10668 {
10669 found = match;
10670 break;
10671 }
10672 goto again;
10673 }
10674 break;
10675
10676 case FUNCTION_DECL:
10677 map_context_from = d_inner;
10678 map_context_to = m_inner;
10679 if (tree m_type = TREE_TYPE (m_inner))
10680 if ((!key.ret
10681 || same_type_p (key.ret, fndecl_declared_return_type (m_inner)))
10682 && type_memfn_rqual (m_type) == key.ref_q
10683 && compparms (key.args, TYPE_ARG_TYPES (m_type))
10684 /* Reject if old is a "C" builtin and new is not "C".
10685 Matches decls_match behaviour. */
10686 && (!DECL_IS_UNDECLARED_BUILTIN (m_inner)
10687 || !DECL_EXTERN_C_P (m_inner)
10688 || DECL_EXTERN_C_P (d_inner)))
10689 {
10690 tree m_reqs = get_constraints (m_inner);
10691 if (m_reqs)
10692 {
10693 if (cxx_dialect < cxx20)
10694 m_reqs = CI_ASSOCIATED_CONSTRAINTS (m_reqs);
10695 else
10696 m_reqs = CI_DECLARATOR_REQS (m_reqs);
10697 }
10698
10699 if (cp_tree_equal (key.constraints, m_reqs))
10700 found = match;
10701 }
10702 map_context_from = map_context_to = NULL_TREE;
10703 break;
10704
10705 case TYPE_DECL:
10706 if (DECL_IMPLICIT_TYPEDEF_P (d_inner)
10707 == DECL_IMPLICIT_TYPEDEF_P (m_inner))
10708 {
10709 if (!IDENTIFIER_ANON_P (DECL_NAME (m_inner)))
10710 return match;
10711 else if (mk == MK_enum
10712 && (TYPE_NAME (ENUM_UNDERLYING_TYPE (TREE_TYPE (m_inner)))
10713 == key.ret))
10714 found = match;
10715 }
10716 break;
10717
10718 default:
10719 found = match;
10720 break;
10721 }
10722 }
10723
10724 return found;
10725 }
10726
10727 /* DECL, INNER & TYPE are a skeleton set of nodes for a decl. Only
10728 the bools have been filled in. Read its merging key and merge it.
10729 Returns the existing decl if there is one. */
10730
10731 tree
10732 trees_in::key_mergeable (int tag, merge_kind mk, tree decl, tree inner,
10733 tree type, tree container, bool is_mod)
10734 {
10735 const char *kind = "new";
10736 tree existing = NULL_TREE;
10737
10738 if (mk & MK_template_mask)
10739 {
10740 spec_entry spec;
10741 spec.tmpl = tree_node ();
10742 spec.args = tree_node ();
10743 unsigned flags = u ();
10744
10745 DECL_NAME (decl) = DECL_NAME (spec.tmpl);
10746 DECL_CONTEXT (decl) = DECL_CONTEXT (spec.tmpl);
10747 DECL_NAME (inner) = DECL_NAME (decl);
10748 DECL_CONTEXT (inner) = DECL_CONTEXT (decl);
10749
10750 spec.spec = decl;
10751 if (mk & MK_tmpl_tmpl_mask)
10752 {
10753 if (inner == decl)
10754 return error_mark_node;
10755 spec.spec = inner;
10756 }
10757 tree constr = NULL_TREE;
10758 bool is_decl = mk & MK_tmpl_decl_mask;
10759 if (is_decl)
10760 {
10761 if (flag_concepts && TREE_CODE (inner) == VAR_DECL)
10762 {
10763 constr = tree_node ();
10764 if (constr)
10765 set_constraints (inner, constr);
10766 }
10767 }
10768 else
10769 {
10770 if (mk == MK_type_spec && inner != decl)
10771 return error_mark_node;
10772 spec.spec = type;
10773 }
10774 existing = match_mergeable_specialization (is_decl, &spec);
10775 if (constr)
10776 /* We'll add these back later, if this is the new decl. */
10777 remove_constraints (inner);
10778
10779 if (!existing)
10780 add_mergeable_specialization (spec.tmpl, spec.args, decl, flags);
10781 else if (mk & MK_tmpl_decl_mask)
10782 {
10783 /* A declaration specialization. */
10784 if (mk & MK_tmpl_tmpl_mask)
10785 if (tree ti = DECL_TEMPLATE_INFO (existing))
10786 {
10787 tree tmpl = TI_TEMPLATE (ti);
10788 if (DECL_TEMPLATE_RESULT (tmpl) == existing)
10789 existing = tmpl;
10790 }
10791 }
10792 else
10793 {
10794 /* A type specialization. */
10795 if (!(mk & MK_tmpl_tmpl_mask))
10796 existing = TYPE_NAME (existing);
10797 else if (tree ti = CLASSTYPE_TEMPLATE_INFO (existing))
10798 {
10799 tree tmpl = TI_TEMPLATE (ti);
10800 if (DECL_TEMPLATE_RESULT (tmpl) == TYPE_NAME (existing))
10801 existing = tmpl;
10802 }
10803 }
10804 }
10805 else if (mk == MK_unique)
10806 kind = "unique";
10807 else
10808 {
10809 tree name = tree_node ();
10810
10811 merge_key key;
10812 unsigned code = u ();
10813 key.ref_q = cp_ref_qualifier ((code >> 0) & 3);
10814 key.index = code >> 2;
10815
10816 if (mk == MK_enum)
10817 key.ret = tree_node ();
10818 else if (mk == MK_partial
10819 || ((mk == MK_named || mk == MK_friend_spec)
10820 && inner && TREE_CODE (inner) == FUNCTION_DECL))
10821 {
10822 key.ret = tree_node ();
10823 tree arg, *arg_ptr = &key.args;
10824 while ((arg = tree_node ())
10825 && arg != void_list_node
10826 && mk != MK_partial)
10827 {
10828 *arg_ptr = tree_cons (NULL_TREE, arg, NULL_TREE);
10829 arg_ptr = &TREE_CHAIN (*arg_ptr);
10830 }
10831 *arg_ptr = arg;
10832 key.constraints = tree_node ();
10833 }
10834
10835 if (get_overrun ())
10836 return error_mark_node;
10837
10838 if (mk < MK_indirect_lwm)
10839 {
10840 DECL_NAME (decl) = name;
10841 DECL_CONTEXT (decl) = FROB_CONTEXT (container);
10842 }
10843 if (inner)
10844 {
10845 DECL_NAME (inner) = DECL_NAME (decl);
10846 DECL_CONTEXT (inner) = DECL_CONTEXT (decl);
10847 }
10848
10849 if (mk == MK_partial)
10850 {
10851 for (tree spec = DECL_TEMPLATE_SPECIALIZATIONS (key.ret);
10852 spec; spec = TREE_CHAIN (spec))
10853 {
10854 tree tmpl = TREE_VALUE (spec);
10855 if (template_args_equal (key.args,
10856 CLASSTYPE_TI_ARGS (TREE_TYPE (tmpl)))
10857 && cp_tree_equal (key.constraints,
10858 get_constraints
10859 (DECL_TEMPLATE_RESULT (tmpl))))
10860 {
10861 existing = tmpl;
10862 break;
10863 }
10864 }
10865 if (!existing)
10866 add_mergeable_specialization (key.ret, key.args, decl, 2);
10867 }
10868 else
10869 switch (TREE_CODE (container))
10870 {
10871 default:
10872 gcc_unreachable ();
10873
10874 case NAMESPACE_DECL:
10875 if (mk == MK_attached)
10876 {
10877 if (DECL_LANG_SPECIFIC (name)
10878 && VAR_OR_FUNCTION_DECL_P (name)
10879 && DECL_MODULE_ATTACHMENTS_P (name))
10880 if (attachset *set = attached_table->get (DECL_UID (name)))
10881 if (key.index < set->num)
10882 {
10883 existing = set->values[key.index];
10884 if (existing)
10885 {
10886 gcc_checking_assert
10887 (DECL_IMPLICIT_TYPEDEF_P (existing));
10888 if (inner != decl)
10889 existing
10890 = CLASSTYPE_TI_TEMPLATE (TREE_TYPE (existing));
10891 }
10892 }
10893 }
10894 else if (is_mod && !(state->is_module () || state->is_partition ()))
10895 kind = "unique";
10896 else
10897 {
10898 gcc_checking_assert (mk == MK_named || mk == MK_enum);
10899 tree mvec;
10900 tree *vslot = mergeable_namespace_slots (container, name,
10901 !is_mod, &mvec);
10902 existing = check_mergeable_decl (mk, decl, *vslot, key);
10903 if (!existing)
10904 add_mergeable_namespace_entity (vslot, decl);
10905 else
10906 {
10907 /* Note that we now have duplicates to deal with in
10908 name lookup. */
10909 if (is_mod)
10910 BINDING_VECTOR_PARTITION_DUPS_P (mvec) = true;
10911 else
10912 BINDING_VECTOR_GLOBAL_DUPS_P (mvec) = true;
10913 }
10914 }
10915 break;
10916
10917 case FUNCTION_DECL:
10918 // FIXME: What about a voldemort? how do we find what it
10919 // duplicates? Do we have to number vmorts relative to
10920 // their containing function? But how would that work
10921 // when matching an in-TU declaration?
10922 kind = "unique";
10923 break;
10924
10925 case TYPE_DECL:
10926 if (is_mod && !(state->is_module () || state->is_partition ())
10927 /* Implicit member functions can come from
10928 anywhere. */
10929 && !(DECL_ARTIFICIAL (decl)
10930 && TREE_CODE (decl) == FUNCTION_DECL
10931 && !DECL_THUNK_P (decl)))
10932 kind = "unique";
10933 else
10934 {
10935 tree ctx = TREE_TYPE (container);
10936
10937 /* For some reason templated enumeral types are not marked
10938 as COMPLETE_TYPE_P, even though they have members.
10939 This may well be a bug elsewhere. */
10940 if (TREE_CODE (ctx) == ENUMERAL_TYPE)
10941 existing = find_enum_member (ctx, name);
10942 else if (COMPLETE_TYPE_P (ctx))
10943 {
10944 switch (mk)
10945 {
10946 default:
10947 gcc_unreachable ();
10948
10949 case MK_named:
10950 existing = lookup_class_binding (ctx, name);
10951 if (existing)
10952 {
10953 tree inner = decl;
10954 if (TREE_CODE (inner) == TEMPLATE_DECL
10955 && !DECL_MEMBER_TEMPLATE_P (inner))
10956 inner = DECL_TEMPLATE_RESULT (inner);
10957
10958 existing = check_mergeable_decl
10959 (mk, inner, existing, key);
10960
10961 if (!existing && DECL_ALIAS_TEMPLATE_P (decl))
10962 {} // FIXME: Insert into specialization
10963 // tables, we'll need the arguments for that!
10964 }
10965 break;
10966
10967 case MK_field:
10968 {
10969 unsigned ix = key.index;
10970 for (tree field = TYPE_FIELDS (ctx);
10971 field; field = DECL_CHAIN (field))
10972 {
10973 tree finner = STRIP_TEMPLATE (field);
10974 if (TREE_CODE (finner) == TREE_CODE (inner))
10975 if (!ix--)
10976 {
10977 existing = field;
10978 break;
10979 }
10980 }
10981 }
10982 break;
10983
10984 case MK_vtable:
10985 {
10986 unsigned ix = key.index;
10987 for (tree vtable = CLASSTYPE_VTABLES (ctx);
10988 vtable; vtable = DECL_CHAIN (vtable))
10989 if (!ix--)
10990 {
10991 existing = vtable;
10992 break;
10993 }
10994 }
10995 break;
10996
10997 case MK_as_base:
10998 {
10999 tree as_base = CLASSTYPE_AS_BASE (ctx);
11000 if (as_base && as_base != ctx)
11001 existing = TYPE_NAME (as_base);
11002 }
11003 break;
11004
11005 case MK_local_friend:
11006 {
11007 unsigned ix = key.index;
11008 for (tree decls = CLASSTYPE_DECL_LIST (ctx);
11009 decls; decls = TREE_CHAIN (decls))
11010 if (!TREE_PURPOSE (decls) && !ix--)
11011 {
11012 existing
11013 = friend_from_decl_list (TREE_VALUE (decls));
11014 break;
11015 }
11016 }
11017 break;
11018 }
11019
11020 if (existing && mk < MK_indirect_lwm && mk != MK_partial
11021 && TREE_CODE (decl) == TEMPLATE_DECL
11022 && !DECL_MEMBER_TEMPLATE_P (decl))
11023 {
11024 tree ti;
11025 if (DECL_IMPLICIT_TYPEDEF_P (existing))
11026 ti = TYPE_TEMPLATE_INFO (TREE_TYPE (existing));
11027 else
11028 ti = DECL_TEMPLATE_INFO (existing);
11029 existing = TI_TEMPLATE (ti);
11030 }
11031 }
11032 }
11033 }
11034
11035 if (mk == MK_friend_spec)
11036 {
11037 spec_entry spec;
11038 spec.tmpl = tree_node ();
11039 spec.args = tree_node ();
11040 spec.spec = decl;
11041 unsigned flags = u ();
11042
11043 tree e = match_mergeable_specialization (true, &spec);
11044 if (!e)
11045 add_mergeable_specialization (spec.tmpl, spec.args,
11046 existing ? existing : decl, flags);
11047 else if (e != existing)
11048 set_overrun ();
11049 }
11050 }
11051
11052 dump (dumper::MERGE)
11053 && dump ("Read:%d's %s merge key (%s) %C:%N", tag, merge_kind_name[mk],
11054 existing ? "matched" : kind, TREE_CODE (decl), decl);
11055
11056 return existing;
11057 }
11058
11059 void
11060 trees_out::binfo_mergeable (tree binfo)
11061 {
11062 tree dom = binfo;
11063 while (tree parent = BINFO_INHERITANCE_CHAIN (dom))
11064 dom = parent;
11065 tree type = BINFO_TYPE (dom);
11066 gcc_checking_assert (TYPE_BINFO (type) == dom);
11067 tree_node (type);
11068 if (streaming_p ())
11069 {
11070 unsigned ix = 0;
11071 for (; dom != binfo; dom = TREE_CHAIN (dom))
11072 ix++;
11073 u (ix);
11074 }
11075 }
11076
11077 unsigned
11078 trees_in::binfo_mergeable (tree *type)
11079 {
11080 *type = tree_node ();
11081 return u ();
11082 }
11083
11084 /* DECL is a just streamed mergeable decl that should match EXISTING. Check
11085 it does and issue an appropriate diagnostic if not. Merge any
11086 bits from DECL to EXISTING. This is stricter matching than
11087 decls_match, because we can rely on ODR-sameness, and we cannot use
11088 decls_match because it can cause instantiations of constraints. */
11089
11090 bool
11091 trees_in::is_matching_decl (tree existing, tree decl)
11092 {
11093 // FIXME: We should probably do some duplicate decl-like stuff here
11094 // (beware, default parms should be the same?) Can we just call
11095 // duplicate_decls and teach it how to handle the module-specific
11096 // permitted/required duplications?
11097
11098 // We know at this point that the decls have matched by key, so we
11099 // can elide some of the checking
11100 gcc_checking_assert (TREE_CODE (existing) == TREE_CODE (decl));
11101
11102 tree inner = decl;
11103 if (TREE_CODE (decl) == TEMPLATE_DECL)
11104 {
11105 inner = DECL_TEMPLATE_RESULT (decl);
11106 gcc_checking_assert (TREE_CODE (DECL_TEMPLATE_RESULT (existing))
11107 == TREE_CODE (inner));
11108 }
11109
11110 gcc_checking_assert (!map_context_from);
11111 /* This mapping requres the new decl on the lhs and the existing
11112 entity on the rhs of the comparitors below. */
11113 map_context_from = inner;
11114 map_context_to = STRIP_TEMPLATE (existing);
11115
11116 if (TREE_CODE (inner) == FUNCTION_DECL)
11117 {
11118 tree e_ret = fndecl_declared_return_type (existing);
11119 tree d_ret = fndecl_declared_return_type (decl);
11120
11121 if (decl != inner && DECL_NAME (inner) == fun_identifier
11122 && LAMBDA_TYPE_P (DECL_CONTEXT (inner)))
11123 /* This has a recursive type that will compare different. */;
11124 else if (!same_type_p (d_ret, e_ret))
11125 goto mismatch;
11126
11127 tree e_type = TREE_TYPE (existing);
11128 tree d_type = TREE_TYPE (decl);
11129
11130 if (DECL_EXTERN_C_P (decl) != DECL_EXTERN_C_P (existing))
11131 goto mismatch;
11132
11133 for (tree e_args = TYPE_ARG_TYPES (e_type),
11134 d_args = TYPE_ARG_TYPES (d_type);
11135 e_args != d_args && (e_args || d_args);
11136 e_args = TREE_CHAIN (e_args), d_args = TREE_CHAIN (d_args))
11137 {
11138 if (!(e_args && d_args))
11139 goto mismatch;
11140
11141 if (!same_type_p (TREE_VALUE (d_args), TREE_VALUE (e_args)))
11142 goto mismatch;
11143
11144 // FIXME: Check default values
11145 }
11146
11147 /* If EXISTING has an undeduced or uninstantiated exception
11148 specification, but DECL does not, propagate the exception
11149 specification. Otherwise we end up asserting or trying to
11150 instantiate it in the middle of loading. */
11151 tree e_spec = TYPE_RAISES_EXCEPTIONS (e_type);
11152 tree d_spec = TYPE_RAISES_EXCEPTIONS (d_type);
11153 if (DEFERRED_NOEXCEPT_SPEC_P (e_spec))
11154 {
11155 if (!DEFERRED_NOEXCEPT_SPEC_P (d_spec)
11156 || (UNEVALUATED_NOEXCEPT_SPEC_P (e_spec)
11157 && !UNEVALUATED_NOEXCEPT_SPEC_P (d_spec)))
11158 {
11159 dump (dumper::MERGE)
11160 && dump ("Propagating instantiated noexcept to %N", existing);
11161 TREE_TYPE (existing) = d_type;
11162
11163 /* Propagate to existing clones. */
11164 tree clone;
11165 FOR_EACH_CLONE (clone, existing)
11166 {
11167 if (TREE_TYPE (clone) == e_type)
11168 TREE_TYPE (clone) = d_type;
11169 else
11170 TREE_TYPE (clone)
11171 = build_exception_variant (TREE_TYPE (clone), d_spec);
11172 }
11173 }
11174 }
11175 else if (!DEFERRED_NOEXCEPT_SPEC_P (d_spec)
11176 && !comp_except_specs (d_spec, e_spec, ce_type))
11177 goto mismatch;
11178 }
11179 /* Using cp_tree_equal because we can meet TYPE_ARGUMENT_PACKs
11180 here. I suspect the entities that directly do that are things
11181 that shouldn't go to duplicate_decls (FIELD_DECLs etc). */
11182 else if (!cp_tree_equal (TREE_TYPE (decl), TREE_TYPE (existing)))
11183 {
11184 mismatch:
11185 map_context_from = map_context_to = NULL_TREE;
11186 if (DECL_IS_UNDECLARED_BUILTIN (existing))
11187 /* Just like duplicate_decls, presum the user knows what
11188 they're doing in overriding a builtin. */
11189 TREE_TYPE (existing) = TREE_TYPE (decl);
11190 else
11191 {
11192 // FIXME:QOI Might be template specialization from a module,
11193 // not necessarily global module
11194 error_at (DECL_SOURCE_LOCATION (decl),
11195 "conflicting global module declaration %#qD", decl);
11196 inform (DECL_SOURCE_LOCATION (existing),
11197 "existing declaration %#qD", existing);
11198 return false;
11199 }
11200 }
11201
11202 map_context_from = map_context_to = NULL_TREE;
11203
11204 if (DECL_IS_UNDECLARED_BUILTIN (existing)
11205 && !DECL_IS_UNDECLARED_BUILTIN (decl))
11206 {
11207 /* We're matching a builtin that the user has yet to declare.
11208 We are the one! This is very much duplicate-decl
11209 shenanigans. */
11210 DECL_SOURCE_LOCATION (existing) = DECL_SOURCE_LOCATION (decl);
11211 if (TREE_CODE (decl) != TYPE_DECL)
11212 {
11213 /* Propagate exceptions etc. */
11214 TREE_TYPE (existing) = TREE_TYPE (decl);
11215 TREE_NOTHROW (existing) = TREE_NOTHROW (decl);
11216 }
11217 /* This is actually an import! */
11218 DECL_MODULE_IMPORT_P (existing) = true;
11219
11220 /* Yay, sliced! */
11221 existing->base = decl->base;
11222
11223 if (TREE_CODE (decl) == FUNCTION_DECL)
11224 {
11225 /* Ew :( */
11226 memcpy (&existing->decl_common.size,
11227 &decl->decl_common.size,
11228 (offsetof (tree_decl_common, pt_uid)
11229 - offsetof (tree_decl_common, size)));
11230 auto bltin_class = DECL_BUILT_IN_CLASS (decl);
11231 existing->function_decl.built_in_class = bltin_class;
11232 auto fncode = DECL_UNCHECKED_FUNCTION_CODE (decl);
11233 DECL_UNCHECKED_FUNCTION_CODE (existing) = fncode;
11234 if (existing->function_decl.built_in_class == BUILT_IN_NORMAL)
11235 {
11236 if (builtin_decl_explicit_p (built_in_function (fncode)))
11237 switch (fncode)
11238 {
11239 case BUILT_IN_STPCPY:
11240 set_builtin_decl_implicit_p
11241 (built_in_function (fncode), true);
11242 break;
11243 default:
11244 set_builtin_decl_declared_p
11245 (built_in_function (fncode), true);
11246 break;
11247 }
11248 copy_attributes_to_builtin (decl);
11249 }
11250 }
11251 }
11252
11253 if (VAR_OR_FUNCTION_DECL_P (decl)
11254 && DECL_TEMPLATE_INSTANTIATED (decl))
11255 /* Don't instantiate again! */
11256 DECL_TEMPLATE_INSTANTIATED (existing) = true;
11257
11258 tree e_inner = inner == decl ? existing : DECL_TEMPLATE_RESULT (existing);
11259
11260 if (TREE_CODE (inner) == FUNCTION_DECL
11261 && DECL_DECLARED_INLINE_P (inner))
11262 DECL_DECLARED_INLINE_P (e_inner) = true;
11263 if (!DECL_EXTERNAL (inner))
11264 DECL_EXTERNAL (e_inner) = false;
11265
11266 // FIXME: Check default tmpl and fn parms here
11267
11268 return true;
11269 }
11270
11271 /* FN is an implicit member function that we've discovered is new to
11272 the class. Add it to the TYPE_FIELDS chain and the method vector.
11273 Reset the appropriate classtype lazy flag. */
11274
11275 bool
11276 trees_in::install_implicit_member (tree fn)
11277 {
11278 tree ctx = DECL_CONTEXT (fn);
11279 tree name = DECL_NAME (fn);
11280 /* We know these are synthesized, so the set of expected prototypes
11281 is quite restricted. We're not validating correctness, just
11282 distinguishing beteeen the small set of possibilities. */
11283 tree parm_type = TREE_VALUE (FUNCTION_FIRST_USER_PARMTYPE (fn));
11284 if (IDENTIFIER_CTOR_P (name))
11285 {
11286 if (CLASSTYPE_LAZY_DEFAULT_CTOR (ctx)
11287 && VOID_TYPE_P (parm_type))
11288 CLASSTYPE_LAZY_DEFAULT_CTOR (ctx) = false;
11289 else if (!TYPE_REF_P (parm_type))
11290 return false;
11291 else if (CLASSTYPE_LAZY_COPY_CTOR (ctx)
11292 && !TYPE_REF_IS_RVALUE (parm_type))
11293 CLASSTYPE_LAZY_COPY_CTOR (ctx) = false;
11294 else if (CLASSTYPE_LAZY_MOVE_CTOR (ctx))
11295 CLASSTYPE_LAZY_MOVE_CTOR (ctx) = false;
11296 else
11297 return false;
11298 }
11299 else if (IDENTIFIER_DTOR_P (name))
11300 {
11301 if (CLASSTYPE_LAZY_DESTRUCTOR (ctx))
11302 CLASSTYPE_LAZY_DESTRUCTOR (ctx) = false;
11303 else
11304 return false;
11305 if (DECL_VIRTUAL_P (fn))
11306 /* A virtual dtor should have been created when the class
11307 became complete. */
11308 return false;
11309 }
11310 else if (name == assign_op_identifier)
11311 {
11312 if (!TYPE_REF_P (parm_type))
11313 return false;
11314 else if (CLASSTYPE_LAZY_COPY_ASSIGN (ctx)
11315 && !TYPE_REF_IS_RVALUE (parm_type))
11316 CLASSTYPE_LAZY_COPY_ASSIGN (ctx) = false;
11317 else if (CLASSTYPE_LAZY_MOVE_ASSIGN (ctx))
11318 CLASSTYPE_LAZY_MOVE_ASSIGN (ctx) = false;
11319 else
11320 return false;
11321 }
11322 else
11323 return false;
11324
11325 dump (dumper::MERGE) && dump ("Adding implicit member %N", fn);
11326
11327 DECL_CHAIN (fn) = TYPE_FIELDS (ctx);
11328 TYPE_FIELDS (ctx) = fn;
11329
11330 add_method (ctx, fn, false);
11331
11332 /* Propagate TYPE_FIELDS. */
11333 fixup_type_variants (ctx);
11334
11335 return true;
11336 }
11337
11338 /* Return non-zero if DECL has a definition that would be interesting to
11339 write out. */
11340
11341 static bool
11342 has_definition (tree decl)
11343 {
11344 bool is_tmpl = TREE_CODE (decl) == TEMPLATE_DECL;
11345 if (is_tmpl)
11346 decl = DECL_TEMPLATE_RESULT (decl);
11347
11348 switch (TREE_CODE (decl))
11349 {
11350 default:
11351 break;
11352
11353 case FUNCTION_DECL:
11354 if (!DECL_SAVED_TREE (decl))
11355 /* Not defined. */
11356 break;
11357
11358 if (DECL_DECLARED_INLINE_P (decl))
11359 return true;
11360
11361 if (DECL_THIS_STATIC (decl)
11362 && (header_module_p ()
11363 || (!DECL_LANG_SPECIFIC (decl) || !DECL_MODULE_PURVIEW_P (decl))))
11364 /* GM static function. */
11365 return true;
11366
11367 if (DECL_TEMPLATE_INFO (decl))
11368 {
11369 int use_tpl = DECL_USE_TEMPLATE (decl);
11370
11371 // FIXME: Partial specializations have definitions too.
11372 if (use_tpl < 2)
11373 return true;
11374 }
11375 break;
11376
11377 case TYPE_DECL:
11378 {
11379 tree type = TREE_TYPE (decl);
11380 if (type == TYPE_MAIN_VARIANT (type)
11381 && decl == TYPE_NAME (type)
11382 && (TREE_CODE (type) == ENUMERAL_TYPE
11383 ? TYPE_VALUES (type) : TYPE_FIELDS (type)))
11384 return true;
11385 }
11386 break;
11387
11388 case VAR_DECL:
11389 if (DECL_LANG_SPECIFIC (decl)
11390 && DECL_TEMPLATE_INFO (decl)
11391 && DECL_USE_TEMPLATE (decl) < 2)
11392 return DECL_INITIAL (decl);
11393 else
11394 {
11395 if (!DECL_INITIALIZED_P (decl))
11396 return false;
11397
11398 if (header_module_p ()
11399 || (!DECL_LANG_SPECIFIC (decl) || !DECL_MODULE_PURVIEW_P (decl)))
11400 /* GM static variable. */
11401 return true;
11402
11403 if (!TREE_CONSTANT (decl))
11404 return false;
11405
11406 return true;
11407 }
11408 break;
11409
11410 case CONCEPT_DECL:
11411 if (DECL_INITIAL (decl))
11412 return true;
11413
11414 break;
11415 }
11416
11417 return false;
11418 }
11419
11420 uintptr_t *
11421 trees_in::find_duplicate (tree existing)
11422 {
11423 if (!duplicates)
11424 return NULL;
11425
11426 return duplicates->get (existing);
11427 }
11428
11429 /* We're starting to read a duplicate DECL. EXISTING is the already
11430 known node. */
11431
11432 void
11433 trees_in::register_duplicate (tree decl, tree existing)
11434 {
11435 if (!duplicates)
11436 duplicates = new duplicate_hash_map (40);
11437
11438 bool existed;
11439 uintptr_t &slot = duplicates->get_or_insert (existing, &existed);
11440 gcc_checking_assert (!existed);
11441 slot = reinterpret_cast<uintptr_t> (decl);
11442 }
11443
11444 /* We've read a definition of MAYBE_EXISTING. If not a duplicate,
11445 return MAYBE_EXISTING (into which the definition should be
11446 installed). Otherwise return NULL if already known bad, or the
11447 duplicate we read (for ODR checking, or extracting addtional merge
11448 information). */
11449
11450 tree
11451 trees_in::odr_duplicate (tree maybe_existing, bool has_defn)
11452 {
11453 tree res = NULL_TREE;
11454
11455 if (uintptr_t *dup = find_duplicate (maybe_existing))
11456 {
11457 if (!(*dup & 1))
11458 res = reinterpret_cast<tree> (*dup);
11459 }
11460 else
11461 res = maybe_existing;
11462
11463 assert_definition (maybe_existing, res && !has_defn);
11464
11465 // FIXME: We probably need to return the template, so that the
11466 // template header can be checked?
11467 return res ? STRIP_TEMPLATE (res) : NULL_TREE;
11468 }
11469
11470 /* The following writer functions rely on the current behaviour of
11471 depset::hash::add_dependency making the decl and defn depset nodes
11472 depend on eachother. That way we don't have to worry about seeding
11473 the tree map with named decls that cannot be looked up by name (I.e
11474 template and function parms). We know the decl and definition will
11475 be in the same cluster, which is what we want. */
11476
11477 void
11478 trees_out::write_function_def (tree decl)
11479 {
11480 tree_node (DECL_RESULT (decl));
11481 tree_node (DECL_INITIAL (decl));
11482 tree_node (DECL_SAVED_TREE (decl));
11483 tree_node (DECL_FRIEND_CONTEXT (decl));
11484
11485 constexpr_fundef *cexpr = retrieve_constexpr_fundef (decl);
11486 int tag = 0;
11487 if (cexpr)
11488 {
11489 if (cexpr->result == error_mark_node)
11490 /* We'll stream the RESULT_DECL naturally during the
11491 serialization. We never need to fish it back again, so
11492 that's ok. */
11493 tag = 0;
11494 else
11495 tag = insert (cexpr->result);
11496 }
11497 if (streaming_p ())
11498 {
11499 i (tag);
11500 if (tag)
11501 dump (dumper::TREE)
11502 && dump ("Constexpr:%d result %N", tag, cexpr->result);
11503 }
11504 if (tag)
11505 {
11506 unsigned ix = 0;
11507 for (tree parm = cexpr->parms; parm; parm = DECL_CHAIN (parm), ix++)
11508 {
11509 tag = insert (parm);
11510 if (streaming_p ())
11511 dump (dumper::TREE)
11512 && dump ("Constexpr:%d parm:%u %N", tag, ix, parm);
11513 }
11514 tree_node (cexpr->body);
11515 }
11516
11517 if (streaming_p ())
11518 {
11519 unsigned flags = 0;
11520
11521 if (DECL_NOT_REALLY_EXTERN (decl))
11522 flags |= 1;
11523
11524 u (flags);
11525 }
11526 }
11527
11528 void
11529 trees_out::mark_function_def (tree)
11530 {
11531 }
11532
11533 bool
11534 trees_in::read_function_def (tree decl, tree maybe_template)
11535 {
11536 dump () && dump ("Reading function definition %N", decl);
11537 tree result = tree_node ();
11538 tree initial = tree_node ();
11539 tree saved = tree_node ();
11540 tree context = tree_node ();
11541 constexpr_fundef cexpr;
11542
11543 tree maybe_dup = odr_duplicate (maybe_template, DECL_SAVED_TREE (decl));
11544 bool installing = maybe_dup && !DECL_SAVED_TREE (decl);
11545
11546 if (maybe_dup)
11547 for (auto parm = DECL_ARGUMENTS (maybe_dup); parm; parm = DECL_CHAIN (parm))
11548 DECL_CONTEXT (parm) = decl;
11549
11550 if (int wtag = i ())
11551 {
11552 int tag = 1;
11553 cexpr.result = error_mark_node;
11554
11555 cexpr.result = copy_decl (result);
11556 tag = insert (cexpr.result);
11557
11558 if (wtag != tag)
11559 set_overrun ();
11560 dump (dumper::TREE)
11561 && dump ("Constexpr:%d result %N", tag, cexpr.result);
11562
11563 cexpr.parms = NULL_TREE;
11564 tree *chain = &cexpr.parms;
11565 unsigned ix = 0;
11566 for (tree parm = DECL_ARGUMENTS (maybe_dup ? maybe_dup : decl);
11567 parm; parm = DECL_CHAIN (parm), ix++)
11568 {
11569 tree p = copy_decl (parm);
11570 tag = insert (p);
11571 dump (dumper::TREE)
11572 && dump ("Constexpr:%d parm:%u %N", tag, ix, p);
11573 *chain = p;
11574 chain = &DECL_CHAIN (p);
11575 }
11576 cexpr.body = tree_node ();
11577 cexpr.decl = decl;
11578 }
11579 else
11580 cexpr.decl = NULL_TREE;
11581
11582 unsigned flags = u ();
11583
11584 if (get_overrun ())
11585 return NULL_TREE;
11586
11587 if (installing)
11588 {
11589 DECL_NOT_REALLY_EXTERN (decl) = flags & 1;
11590 DECL_RESULT (decl) = result;
11591 DECL_INITIAL (decl) = initial;
11592 DECL_SAVED_TREE (decl) = saved;
11593 if (maybe_dup)
11594 DECL_ARGUMENTS (decl) = DECL_ARGUMENTS (maybe_dup);
11595
11596 if (context)
11597 SET_DECL_FRIEND_CONTEXT (decl, context);
11598 if (cexpr.decl)
11599 register_constexpr_fundef (cexpr);
11600 post_process (maybe_template);
11601 }
11602 else if (maybe_dup)
11603 {
11604 // FIXME:QOI Check matching defn
11605 }
11606
11607 return true;
11608 }
11609
11610 /* Also for CONCEPT_DECLs. */
11611
11612 void
11613 trees_out::write_var_def (tree decl)
11614 {
11615 tree init = DECL_INITIAL (decl);
11616 tree_node (init);
11617 if (!init)
11618 {
11619 tree dyn_init = NULL_TREE;
11620
11621 if (DECL_NONTRIVIALLY_INITIALIZED_P (decl))
11622 {
11623 dyn_init = value_member (decl,
11624 CP_DECL_THREAD_LOCAL_P (decl)
11625 ? tls_aggregates : static_aggregates);
11626 gcc_checking_assert (dyn_init);
11627 /* Mark it so write_inits knows this is needed. */
11628 TREE_LANG_FLAG_0 (dyn_init) = true;
11629 dyn_init = TREE_PURPOSE (dyn_init);
11630 }
11631 tree_node (dyn_init);
11632 }
11633 }
11634
11635 void
11636 trees_out::mark_var_def (tree)
11637 {
11638 }
11639
11640 bool
11641 trees_in::read_var_def (tree decl, tree maybe_template)
11642 {
11643 /* Do not mark the virtual table entries as used. */
11644 bool vtable = TREE_CODE (decl) == VAR_DECL && DECL_VTABLE_OR_VTT_P (decl);
11645 unused += vtable;
11646 tree init = tree_node ();
11647 tree dyn_init = init ? NULL_TREE : tree_node ();
11648 unused -= vtable;
11649
11650 if (get_overrun ())
11651 return false;
11652
11653 bool initialized = (VAR_P (decl) ? bool (DECL_INITIALIZED_P (decl))
11654 : bool (DECL_INITIAL (decl)));
11655 tree maybe_dup = odr_duplicate (maybe_template, initialized);
11656 bool installing = maybe_dup && !initialized;
11657 if (installing)
11658 {
11659 if (DECL_EXTERNAL (decl))
11660 DECL_NOT_REALLY_EXTERN (decl) = true;
11661 if (VAR_P (decl))
11662 DECL_INITIALIZED_P (decl) = true;
11663 DECL_INITIAL (decl) = init;
11664 if (!dyn_init)
11665 ;
11666 else if (CP_DECL_THREAD_LOCAL_P (decl))
11667 tls_aggregates = tree_cons (dyn_init, decl, tls_aggregates);
11668 else
11669 static_aggregates = tree_cons (dyn_init, decl, static_aggregates);
11670 }
11671 else if (maybe_dup)
11672 {
11673 // FIXME:QOI Check matching defn
11674 }
11675
11676 return true;
11677 }
11678
11679 /* If MEMBER doesn't have an independent life outside the class,
11680 return it (or it's TEMPLATE_DECL). Otherwise NULL. */
11681
11682 static tree
11683 member_owned_by_class (tree member)
11684 {
11685 gcc_assert (DECL_P (member));
11686
11687 /* Clones are owned by their origin. */
11688 if (DECL_CLONED_FUNCTION_P (member))
11689 return NULL;
11690
11691 if (TREE_CODE (member) == FIELD_DECL)
11692 /* FIELD_DECLS can have template info in some cases. We always
11693 want the FIELD_DECL though, as there's never a TEMPLATE_DECL
11694 wrapping them. */
11695 return member;
11696
11697 int use_tpl = -1;
11698 if (tree ti = node_template_info (member, use_tpl))
11699 {
11700 // FIXME: Don't bail on things that CANNOT have their own
11701 // template header. No, make sure they're in the same cluster.
11702 if (use_tpl > 0)
11703 return NULL_TREE;
11704
11705 if (DECL_TEMPLATE_RESULT (TI_TEMPLATE (ti)) == member)
11706 member = TI_TEMPLATE (ti);
11707 }
11708 return member;
11709 }
11710
11711 void
11712 trees_out::write_class_def (tree defn)
11713 {
11714 gcc_assert (DECL_P (defn));
11715 if (streaming_p ())
11716 dump () && dump ("Writing class definition %N", defn);
11717
11718 tree type = TREE_TYPE (defn);
11719 tree_node (TYPE_SIZE (type));
11720 tree_node (TYPE_SIZE_UNIT (type));
11721 tree_node (TYPE_VFIELD (type));
11722 tree_node (TYPE_BINFO (type));
11723
11724 vec_chained_decls (TYPE_FIELDS (type));
11725
11726 /* Every class but __as_base has a type-specific. */
11727 gcc_checking_assert (!TYPE_LANG_SPECIFIC (type) == IS_FAKE_BASE_TYPE (type));
11728
11729 if (TYPE_LANG_SPECIFIC (type))
11730 {
11731 {
11732 vec<tree, va_gc> *v = CLASSTYPE_MEMBER_VEC (type);
11733 if (!v)
11734 {
11735 gcc_checking_assert (!streaming_p ());
11736 /* Force a class vector. */
11737 v = set_class_bindings (type, -1);
11738 gcc_checking_assert (v);
11739 }
11740
11741 unsigned len = v->length ();
11742 if (streaming_p ())
11743 u (len);
11744 for (unsigned ix = 0; ix != len; ix++)
11745 {
11746 tree m = (*v)[ix];
11747 if (TREE_CODE (m) == TYPE_DECL
11748 && DECL_ARTIFICIAL (m)
11749 && TYPE_STUB_DECL (TREE_TYPE (m)) == m)
11750 /* This is a using-decl for a type, or an anonymous
11751 struct (maybe with a typedef name). Write the type. */
11752 m = TREE_TYPE (m);
11753 tree_node (m);
11754 }
11755 }
11756 tree_node (CLASSTYPE_LAMBDA_EXPR (type));
11757
11758 /* TYPE_CONTAINS_VPTR_P looks at the vbase vector, which the
11759 reader won't know at this point. */
11760 int has_vptr = TYPE_CONTAINS_VPTR_P (type);
11761
11762 if (streaming_p ())
11763 {
11764 unsigned nvbases = vec_safe_length (CLASSTYPE_VBASECLASSES (type));
11765 u (nvbases);
11766 i (has_vptr);
11767 }
11768
11769 if (has_vptr)
11770 {
11771 tree_vec (CLASSTYPE_PURE_VIRTUALS (type));
11772 tree_pair_vec (CLASSTYPE_VCALL_INDICES (type));
11773 tree_node (CLASSTYPE_KEY_METHOD (type));
11774 }
11775 }
11776
11777 if (TYPE_LANG_SPECIFIC (type))
11778 {
11779 tree_node (CLASSTYPE_PRIMARY_BINFO (type));
11780
11781 tree as_base = CLASSTYPE_AS_BASE (type);
11782 if (as_base)
11783 as_base = TYPE_NAME (as_base);
11784 tree_node (as_base);
11785
11786 /* Write the vtables. */
11787 tree vtables = CLASSTYPE_VTABLES (type);
11788 vec_chained_decls (vtables);
11789 for (; vtables; vtables = TREE_CHAIN (vtables))
11790 write_definition (vtables);
11791
11792 /* Write the friend classes. */
11793 tree_list (CLASSTYPE_FRIEND_CLASSES (type), false);
11794
11795 /* Write the friend functions. */
11796 for (tree friends = DECL_FRIENDLIST (defn);
11797 friends; friends = TREE_CHAIN (friends))
11798 {
11799 /* Name of these friends. */
11800 tree_node (TREE_PURPOSE (friends));
11801 tree_list (TREE_VALUE (friends), false);
11802 }
11803 /* End of friend fns. */
11804 tree_node (NULL_TREE);
11805
11806 /* Write the decl list. */
11807 tree_list (CLASSTYPE_DECL_LIST (type), true);
11808
11809 if (TYPE_CONTAINS_VPTR_P (type))
11810 {
11811 /* Write the thunks. */
11812 for (tree decls = TYPE_FIELDS (type);
11813 decls; decls = DECL_CHAIN (decls))
11814 if (TREE_CODE (decls) == FUNCTION_DECL
11815 && DECL_VIRTUAL_P (decls)
11816 && DECL_THUNKS (decls))
11817 {
11818 tree_node (decls);
11819 /* Thunks are always unique, so chaining is ok. */
11820 chained_decls (DECL_THUNKS (decls));
11821 }
11822 tree_node (NULL_TREE);
11823 }
11824 }
11825 }
11826
11827 void
11828 trees_out::mark_class_member (tree member, bool do_defn)
11829 {
11830 gcc_assert (DECL_P (member));
11831
11832 member = member_owned_by_class (member);
11833 if (member)
11834 mark_declaration (member, do_defn && has_definition (member));
11835 }
11836
11837 void
11838 trees_out::mark_class_def (tree defn)
11839 {
11840 gcc_assert (DECL_P (defn));
11841 tree type = TREE_TYPE (defn);
11842 /* Mark the class members that are not type-decls and cannot have
11843 independent definitions. */
11844 for (tree member = TYPE_FIELDS (type); member; member = DECL_CHAIN (member))
11845 if (TREE_CODE (member) == FIELD_DECL
11846 || TREE_CODE (member) == USING_DECL
11847 /* A cloned enum-decl from 'using enum unrelated;' */
11848 || (TREE_CODE (member) == CONST_DECL
11849 && DECL_CONTEXT (member) == type))
11850 {
11851 mark_class_member (member);
11852 if (TREE_CODE (member) == FIELD_DECL)
11853 if (tree repr = DECL_BIT_FIELD_REPRESENTATIVE (member))
11854 mark_declaration (repr, false);
11855 }
11856
11857 /* Mark the binfo hierarchy. */
11858 for (tree child = TYPE_BINFO (type); child; child = TREE_CHAIN (child))
11859 mark_by_value (child);
11860
11861 if (TYPE_LANG_SPECIFIC (type))
11862 {
11863 for (tree vtable = CLASSTYPE_VTABLES (type);
11864 vtable; vtable = TREE_CHAIN (vtable))
11865 mark_declaration (vtable, true);
11866
11867 if (TYPE_CONTAINS_VPTR_P (type))
11868 /* Mark the thunks, they belong to the class definition,
11869 /not/ the thunked-to function. */
11870 for (tree decls = TYPE_FIELDS (type);
11871 decls; decls = DECL_CHAIN (decls))
11872 if (TREE_CODE (decls) == FUNCTION_DECL)
11873 for (tree thunks = DECL_THUNKS (decls);
11874 thunks; thunks = DECL_CHAIN (thunks))
11875 mark_declaration (thunks, false);
11876 }
11877 }
11878
11879 /* Nop sorting, needed for resorting the member vec. */
11880
11881 static void
11882 nop (void *, void *)
11883 {
11884 }
11885
11886 bool
11887 trees_in::read_class_def (tree defn, tree maybe_template)
11888 {
11889 gcc_assert (DECL_P (defn));
11890 dump () && dump ("Reading class definition %N", defn);
11891 tree type = TREE_TYPE (defn);
11892 tree size = tree_node ();
11893 tree size_unit = tree_node ();
11894 tree vfield = tree_node ();
11895 tree binfo = tree_node ();
11896 vec<tree, va_gc> *vbase_vec = NULL;
11897 vec<tree, va_gc> *member_vec = NULL;
11898 vec<tree, va_gc> *pure_virts = NULL;
11899 vec<tree_pair_s, va_gc> *vcall_indices = NULL;
11900 tree key_method = NULL_TREE;
11901 tree lambda = NULL_TREE;
11902
11903 /* Read the fields. */
11904 vec<tree, va_heap> *fields = vec_chained_decls ();
11905
11906 if (TYPE_LANG_SPECIFIC (type))
11907 {
11908 if (unsigned len = u ())
11909 {
11910 vec_alloc (member_vec, len);
11911 for (unsigned ix = 0; ix != len; ix++)
11912 {
11913 tree m = tree_node ();
11914 if (get_overrun ())
11915 break;
11916 if (TYPE_P (m))
11917 m = TYPE_STUB_DECL (m);
11918 member_vec->quick_push (m);
11919 }
11920 }
11921 lambda = tree_node ();
11922
11923 if (!get_overrun ())
11924 {
11925 unsigned nvbases = u ();
11926 if (nvbases)
11927 {
11928 vec_alloc (vbase_vec, nvbases);
11929 for (tree child = binfo; child; child = TREE_CHAIN (child))
11930 if (BINFO_VIRTUAL_P (child))
11931 vbase_vec->quick_push (child);
11932 }
11933 }
11934
11935 if (!get_overrun ())
11936 {
11937 int has_vptr = i ();
11938 if (has_vptr)
11939 {
11940 pure_virts = tree_vec ();
11941 vcall_indices = tree_pair_vec ();
11942 key_method = tree_node ();
11943 }
11944 }
11945 }
11946
11947 tree maybe_dup = odr_duplicate (maybe_template, TYPE_SIZE (type));
11948 bool installing = maybe_dup && !TYPE_SIZE (type);
11949 if (installing)
11950 {
11951 if (DECL_EXTERNAL (defn) && TYPE_LANG_SPECIFIC (type))
11952 {
11953 /* We don't deal with not-really-extern, because, for a
11954 module you want the import to be the interface, and for a
11955 header-unit, you're doing it wrong. */
11956 CLASSTYPE_INTERFACE_UNKNOWN (type) = false;
11957 CLASSTYPE_INTERFACE_ONLY (type) = true;
11958 }
11959
11960 if (maybe_dup != defn)
11961 {
11962 // FIXME: This is needed on other defns too, almost
11963 // duplicate-decl like? See is_matching_decl too.
11964 /* Copy flags from the duplicate. */
11965 tree type_dup = TREE_TYPE (maybe_dup);
11966
11967 /* Core pieces. */
11968 TYPE_MODE_RAW (type) = TYPE_MODE_RAW (type_dup);
11969 SET_DECL_MODE (defn, DECL_MODE (maybe_dup));
11970 TREE_ADDRESSABLE (type) = TREE_ADDRESSABLE (type_dup);
11971 DECL_SIZE (defn) = DECL_SIZE (maybe_dup);
11972 DECL_SIZE_UNIT (defn) = DECL_SIZE_UNIT (maybe_dup);
11973 DECL_ALIGN_RAW (defn) = DECL_ALIGN_RAW (maybe_dup);
11974 DECL_WARN_IF_NOT_ALIGN_RAW (defn)
11975 = DECL_WARN_IF_NOT_ALIGN_RAW (maybe_dup);
11976 DECL_USER_ALIGN (defn) = DECL_USER_ALIGN (maybe_dup);
11977
11978 /* C++ pieces. */
11979 TYPE_POLYMORPHIC_P (type) = TYPE_POLYMORPHIC_P (type_dup);
11980 TYPE_HAS_USER_CONSTRUCTOR (type)
11981 = TYPE_HAS_USER_CONSTRUCTOR (type_dup);
11982 TYPE_HAS_NONTRIVIAL_DESTRUCTOR (type)
11983 = TYPE_HAS_NONTRIVIAL_DESTRUCTOR (type_dup);
11984
11985 if (auto ls = TYPE_LANG_SPECIFIC (type_dup))
11986 {
11987 if (TYPE_LANG_SPECIFIC (type))
11988 {
11989 CLASSTYPE_BEFRIENDING_CLASSES (type_dup)
11990 = CLASSTYPE_BEFRIENDING_CLASSES (type);
11991 CLASSTYPE_TYPEINFO_VAR (type_dup)
11992 = CLASSTYPE_TYPEINFO_VAR (type);
11993 }
11994 for (tree v = type; v; v = TYPE_NEXT_VARIANT (v))
11995 TYPE_LANG_SPECIFIC (v) = ls;
11996 }
11997 }
11998
11999 TYPE_SIZE (type) = size;
12000 TYPE_SIZE_UNIT (type) = size_unit;
12001
12002 if (fields)
12003 {
12004 tree *chain = &TYPE_FIELDS (type);
12005 unsigned len = fields->length ();
12006 for (unsigned ix = 0; ix != len; ix++)
12007 {
12008 tree decl = (*fields)[ix];
12009
12010 if (!decl)
12011 {
12012 /* An anonymous struct with typedef name. */
12013 tree tdef = (*fields)[ix+1];
12014 decl = TYPE_STUB_DECL (TREE_TYPE (tdef));
12015 gcc_checking_assert (IDENTIFIER_ANON_P (DECL_NAME (decl))
12016 && decl != tdef);
12017 }
12018
12019 gcc_checking_assert (!*chain == !DECL_CLONED_FUNCTION_P (decl));
12020 *chain = decl;
12021 chain = &DECL_CHAIN (decl);
12022
12023 if (TREE_CODE (decl) == USING_DECL
12024 && TREE_CODE (USING_DECL_SCOPE (decl)) == RECORD_TYPE)
12025 {
12026 /* Reconstruct DECL_ACCESS. */
12027 tree decls = USING_DECL_DECLS (decl);
12028 tree access = declared_access (decl);
12029
12030 for (ovl_iterator iter (decls); iter; ++iter)
12031 {
12032 tree d = *iter;
12033
12034 retrofit_lang_decl (d);
12035 tree list = DECL_ACCESS (d);
12036
12037 if (!purpose_member (type, list))
12038 DECL_ACCESS (d) = tree_cons (type, access, list);
12039 }
12040 }
12041 }
12042 }
12043
12044 TYPE_VFIELD (type) = vfield;
12045 TYPE_BINFO (type) = binfo;
12046
12047 if (TYPE_LANG_SPECIFIC (type))
12048 {
12049 CLASSTYPE_LAMBDA_EXPR (type) = lambda;
12050
12051 CLASSTYPE_MEMBER_VEC (type) = member_vec;
12052 CLASSTYPE_PURE_VIRTUALS (type) = pure_virts;
12053 CLASSTYPE_VCALL_INDICES (type) = vcall_indices;
12054
12055 CLASSTYPE_KEY_METHOD (type) = key_method;
12056
12057 CLASSTYPE_VBASECLASSES (type) = vbase_vec;
12058
12059 /* Resort the member vector. */
12060 resort_type_member_vec (member_vec, NULL, nop, NULL);
12061 }
12062 }
12063 else if (maybe_dup)
12064 {
12065 // FIXME:QOI Check matching defn
12066 }
12067
12068 if (TYPE_LANG_SPECIFIC (type))
12069 {
12070 tree primary = tree_node ();
12071 tree as_base = tree_node ();
12072
12073 if (as_base)
12074 as_base = TREE_TYPE (as_base);
12075
12076 /* Read the vtables. */
12077 vec<tree, va_heap> *vtables = vec_chained_decls ();
12078 if (vtables)
12079 {
12080 unsigned len = vtables->length ();
12081 for (unsigned ix = 0; ix != len; ix++)
12082 {
12083 tree vtable = (*vtables)[ix];
12084 read_var_def (vtable, vtable);
12085 }
12086 }
12087
12088 tree friend_classes = tree_list (false);
12089 tree friend_functions = NULL_TREE;
12090 for (tree *chain = &friend_functions;
12091 tree name = tree_node (); chain = &TREE_CHAIN (*chain))
12092 {
12093 tree val = tree_list (false);
12094 *chain = build_tree_list (name, val);
12095 }
12096 tree decl_list = tree_list (true);
12097
12098 if (installing)
12099 {
12100 CLASSTYPE_PRIMARY_BINFO (type) = primary;
12101 CLASSTYPE_AS_BASE (type) = as_base;
12102
12103 if (vtables)
12104 {
12105 if (!CLASSTYPE_KEY_METHOD (type)
12106 /* Sneaky user may have defined it inline
12107 out-of-class. */
12108 || DECL_DECLARED_INLINE_P (CLASSTYPE_KEY_METHOD (type)))
12109 vec_safe_push (keyed_classes, type);
12110 unsigned len = vtables->length ();
12111 tree *chain = &CLASSTYPE_VTABLES (type);
12112 for (unsigned ix = 0; ix != len; ix++)
12113 {
12114 tree vtable = (*vtables)[ix];
12115 gcc_checking_assert (!*chain);
12116 *chain = vtable;
12117 chain = &DECL_CHAIN (vtable);
12118 }
12119 }
12120 CLASSTYPE_FRIEND_CLASSES (type) = friend_classes;
12121 DECL_FRIENDLIST (defn) = friend_functions;
12122 CLASSTYPE_DECL_LIST (type) = decl_list;
12123
12124 for (; friend_classes; friend_classes = TREE_CHAIN (friend_classes))
12125 {
12126 tree f = TREE_VALUE (friend_classes);
12127
12128 if (TYPE_P (f))
12129 {
12130 CLASSTYPE_BEFRIENDING_CLASSES (f)
12131 = tree_cons (NULL_TREE, type,
12132 CLASSTYPE_BEFRIENDING_CLASSES (f));
12133 dump () && dump ("Class %N befriending %C:%N",
12134 type, TREE_CODE (f), f);
12135 }
12136 }
12137
12138 for (; friend_functions;
12139 friend_functions = TREE_CHAIN (friend_functions))
12140 for (tree friend_decls = TREE_VALUE (friend_functions);
12141 friend_decls; friend_decls = TREE_CHAIN (friend_decls))
12142 {
12143 tree f = TREE_VALUE (friend_decls);
12144
12145 DECL_BEFRIENDING_CLASSES (f)
12146 = tree_cons (NULL_TREE, type, DECL_BEFRIENDING_CLASSES (f));
12147 dump () && dump ("Class %N befriending %C:%N",
12148 type, TREE_CODE (f), f);
12149 }
12150 }
12151
12152 if (TYPE_CONTAINS_VPTR_P (type))
12153 /* Read and install the thunks. */
12154 while (tree vfunc = tree_node ())
12155 {
12156 tree thunks = chained_decls ();
12157 if (installing)
12158 SET_DECL_THUNKS (vfunc, thunks);
12159 }
12160
12161 vec_free (vtables);
12162 }
12163
12164 /* Propagate to all variants. */
12165 if (installing)
12166 fixup_type_variants (type);
12167
12168 /* IS_FAKE_BASE_TYPE is inaccurate at this point, because if this is
12169 the fake base, we've not hooked it into the containing class's
12170 data structure yet. Fortunately it has a unique name. */
12171 if (installing
12172 && DECL_NAME (defn) != as_base_identifier
12173 && (!CLASSTYPE_TEMPLATE_INFO (type)
12174 || !uses_template_parms (TI_ARGS (CLASSTYPE_TEMPLATE_INFO (type)))))
12175 /* Emit debug info. It'd be nice to know if the interface TU
12176 already emitted this. */
12177 rest_of_type_compilation (type, !LOCAL_CLASS_P (type));
12178
12179 vec_free (fields);
12180
12181 return !get_overrun ();
12182 }
12183
12184 void
12185 trees_out::write_enum_def (tree decl)
12186 {
12187 tree type = TREE_TYPE (decl);
12188
12189 tree_node (TYPE_VALUES (type));
12190 tree_node (TYPE_MIN_VALUE (type));
12191 tree_node (TYPE_MAX_VALUE (type));
12192 }
12193
12194 void
12195 trees_out::mark_enum_def (tree decl)
12196 {
12197 tree type = TREE_TYPE (decl);
12198
12199 for (tree values = TYPE_VALUES (type); values; values = TREE_CHAIN (values))
12200 {
12201 tree cst = TREE_VALUE (values);
12202 mark_by_value (cst);
12203 /* We must mark the init to avoid circularity in tt_enum_int. */
12204 if (tree init = DECL_INITIAL (cst))
12205 if (TREE_CODE (init) == INTEGER_CST)
12206 mark_by_value (init);
12207 }
12208 }
12209
12210 bool
12211 trees_in::read_enum_def (tree defn, tree maybe_template)
12212 {
12213 tree type = TREE_TYPE (defn);
12214 tree values = tree_node ();
12215 tree min = tree_node ();
12216 tree max = tree_node ();
12217
12218 if (get_overrun ())
12219 return false;
12220
12221 tree maybe_dup = odr_duplicate (maybe_template, TYPE_VALUES (type));
12222 bool installing = maybe_dup && !TYPE_VALUES (type);
12223
12224 if (installing)
12225 {
12226 TYPE_VALUES (type) = values;
12227 TYPE_MIN_VALUE (type) = min;
12228 TYPE_MAX_VALUE (type) = max;
12229
12230 rest_of_type_compilation (type, DECL_NAMESPACE_SCOPE_P (defn));
12231 }
12232 else if (maybe_dup)
12233 {
12234 tree known = TYPE_VALUES (type);
12235 for (; known && values;
12236 known = TREE_CHAIN (known), values = TREE_CHAIN (values))
12237 {
12238 tree known_decl = TREE_VALUE (known);
12239 tree new_decl = TREE_VALUE (values);
12240
12241 if (DECL_NAME (known_decl) != DECL_NAME (new_decl))
12242 goto bad;
12243
12244 new_decl = maybe_duplicate (new_decl);
12245
12246 if (!cp_tree_equal (DECL_INITIAL (known_decl),
12247 DECL_INITIAL (new_decl)))
12248 goto bad;
12249 }
12250
12251 if (known || values)
12252 goto bad;
12253
12254 if (!cp_tree_equal (TYPE_MIN_VALUE (type), min)
12255 || !cp_tree_equal (TYPE_MAX_VALUE (type), max))
12256 {
12257 bad:;
12258 error_at (DECL_SOURCE_LOCATION (maybe_dup),
12259 "definition of %qD does not match", maybe_dup);
12260 inform (DECL_SOURCE_LOCATION (defn),
12261 "existing definition %qD", defn);
12262
12263 tree known_decl = NULL_TREE, new_decl = NULL_TREE;
12264
12265 if (known)
12266 known_decl = TREE_VALUE (known);
12267 if (values)
12268 new_decl = maybe_duplicate (TREE_VALUE (values));
12269
12270 if (known_decl && new_decl)
12271 {
12272 inform (DECL_SOURCE_LOCATION (new_decl),
12273 "... this enumerator %qD", new_decl);
12274 inform (DECL_SOURCE_LOCATION (known_decl),
12275 "enumerator %qD does not match ...", known_decl);
12276 }
12277 else if (known_decl || new_decl)
12278 {
12279 tree extra = known_decl ? known_decl : new_decl;
12280 inform (DECL_SOURCE_LOCATION (extra),
12281 "additional enumerators beginning with %qD", extra);
12282 }
12283 else
12284 inform (DECL_SOURCE_LOCATION (maybe_dup),
12285 "enumeration range differs");
12286
12287 /* Mark it bad. */
12288 unmatched_duplicate (maybe_template);
12289 }
12290 }
12291
12292 return true;
12293 }
12294
12295 /* Write out the body of DECL. See above circularity note. */
12296
12297 void
12298 trees_out::write_definition (tree decl)
12299 {
12300 if (streaming_p ())
12301 {
12302 assert_definition (decl);
12303 dump ()
12304 && dump ("Writing definition %C:%N", TREE_CODE (decl), decl);
12305 }
12306 else
12307 dump (dumper::DEPEND)
12308 && dump ("Depending definition %C:%N", TREE_CODE (decl), decl);
12309
12310 again:
12311 switch (TREE_CODE (decl))
12312 {
12313 default:
12314 gcc_unreachable ();
12315
12316 case TEMPLATE_DECL:
12317 decl = DECL_TEMPLATE_RESULT (decl);
12318 goto again;
12319
12320 case FUNCTION_DECL:
12321 write_function_def (decl);
12322 break;
12323
12324 case TYPE_DECL:
12325 {
12326 tree type = TREE_TYPE (decl);
12327 gcc_assert (TYPE_MAIN_VARIANT (type) == type
12328 && TYPE_NAME (type) == decl);
12329 if (TREE_CODE (type) == ENUMERAL_TYPE)
12330 write_enum_def (decl);
12331 else
12332 write_class_def (decl);
12333 }
12334 break;
12335
12336 case VAR_DECL:
12337 case CONCEPT_DECL:
12338 write_var_def (decl);
12339 break;
12340 }
12341 }
12342
12343 /* Mark a declaration for by-value walking. If DO_DEFN is true, mark
12344 its body too. */
12345
12346 void
12347 trees_out::mark_declaration (tree decl, bool do_defn)
12348 {
12349 mark_by_value (decl);
12350
12351 if (TREE_CODE (decl) == TEMPLATE_DECL)
12352 decl = DECL_TEMPLATE_RESULT (decl);
12353
12354 if (!do_defn)
12355 return;
12356
12357 switch (TREE_CODE (decl))
12358 {
12359 default:
12360 gcc_unreachable ();
12361
12362 case FUNCTION_DECL:
12363 mark_function_def (decl);
12364 break;
12365
12366 case TYPE_DECL:
12367 {
12368 tree type = TREE_TYPE (decl);
12369 gcc_assert (TYPE_MAIN_VARIANT (type) == type
12370 && TYPE_NAME (type) == decl);
12371 if (TREE_CODE (type) == ENUMERAL_TYPE)
12372 mark_enum_def (decl);
12373 else
12374 mark_class_def (decl);
12375 }
12376 break;
12377
12378 case VAR_DECL:
12379 case CONCEPT_DECL:
12380 mark_var_def (decl);
12381 break;
12382 }
12383 }
12384
12385 /* Read in the body of DECL. See above circularity note. */
12386
12387 bool
12388 trees_in::read_definition (tree decl)
12389 {
12390 dump () && dump ("Reading definition %C %N", TREE_CODE (decl), decl);
12391
12392 tree maybe_template = decl;
12393
12394 again:
12395 switch (TREE_CODE (decl))
12396 {
12397 default:
12398 break;
12399
12400 case TEMPLATE_DECL:
12401 decl = DECL_TEMPLATE_RESULT (decl);
12402 goto again;
12403
12404 case FUNCTION_DECL:
12405 return read_function_def (decl, maybe_template);
12406
12407 case TYPE_DECL:
12408 {
12409 tree type = TREE_TYPE (decl);
12410 gcc_assert (TYPE_MAIN_VARIANT (type) == type
12411 && TYPE_NAME (type) == decl);
12412 if (TREE_CODE (type) == ENUMERAL_TYPE)
12413 return read_enum_def (decl, maybe_template);
12414 else
12415 return read_class_def (decl, maybe_template);
12416 }
12417 break;
12418
12419 case VAR_DECL:
12420 case CONCEPT_DECL:
12421 return read_var_def (decl, maybe_template);
12422 }
12423
12424 return false;
12425 }
12426
12427 /* Lookup an maybe insert a slot for depset for KEY. */
12428
12429 depset **
12430 depset::hash::entity_slot (tree entity, bool insert)
12431 {
12432 traits::compare_type key (entity, NULL);
12433 depset **slot = find_slot_with_hash (key, traits::hash (key),
12434 insert ? INSERT : NO_INSERT);
12435
12436 return slot;
12437 }
12438
12439 depset **
12440 depset::hash::binding_slot (tree ctx, tree name, bool insert)
12441 {
12442 traits::compare_type key (ctx, name);
12443 depset **slot = find_slot_with_hash (key, traits::hash (key),
12444 insert ? INSERT : NO_INSERT);
12445
12446 return slot;
12447 }
12448
12449 depset *
12450 depset::hash::find_dependency (tree decl)
12451 {
12452 depset **slot = entity_slot (decl, false);
12453
12454 return slot ? *slot : NULL;
12455 }
12456
12457 depset *
12458 depset::hash::find_binding (tree ctx, tree name)
12459 {
12460 depset **slot = binding_slot (ctx, name, false);
12461
12462 return slot ? *slot : NULL;
12463 }
12464
12465 /* DECL is a newly discovered dependency. Create the depset, if it
12466 doesn't already exist. Add it to the worklist if so.
12467
12468 DECL will be an OVL_USING_P OVERLOAD, if it's from a binding that's
12469 a using decl.
12470
12471 We do not have to worry about adding the same dependency more than
12472 once. First it's harmless, but secondly the TREE_VISITED marking
12473 prevents us wanting to do it anyway. */
12474
12475 depset *
12476 depset::hash::make_dependency (tree decl, entity_kind ek)
12477 {
12478 /* Make sure we're being told consistent information. */
12479 gcc_checking_assert ((ek == EK_NAMESPACE)
12480 == (TREE_CODE (decl) == NAMESPACE_DECL
12481 && !DECL_NAMESPACE_ALIAS (decl)));
12482 gcc_checking_assert (ek != EK_BINDING && ek != EK_REDIRECT);
12483 gcc_checking_assert (TREE_CODE (decl) != FIELD_DECL
12484 && (TREE_CODE (decl) != USING_DECL
12485 || TREE_CODE (DECL_CONTEXT (decl)) == FUNCTION_DECL));
12486 gcc_checking_assert (!is_key_order ());
12487 if (ek == EK_USING)
12488 gcc_checking_assert (TREE_CODE (decl) == OVERLOAD);
12489
12490 if (TREE_CODE (decl) == TEMPLATE_DECL)
12491 {
12492 /* The template should have copied these from its result decl. */
12493 tree res = DECL_TEMPLATE_RESULT (decl);
12494
12495 gcc_checking_assert (DECL_MODULE_EXPORT_P (decl)
12496 == DECL_MODULE_EXPORT_P (res));
12497 if (DECL_LANG_SPECIFIC (res))
12498 {
12499 gcc_checking_assert (DECL_MODULE_PURVIEW_P (decl)
12500 == DECL_MODULE_PURVIEW_P (res));
12501 gcc_checking_assert ((DECL_MODULE_IMPORT_P (decl)
12502 == DECL_MODULE_IMPORT_P (res)));
12503 }
12504 }
12505
12506 depset **slot = entity_slot (decl, true);
12507 depset *dep = *slot;
12508 bool for_binding = ek == EK_FOR_BINDING;
12509
12510 if (!dep)
12511 {
12512 if (DECL_IMPLICIT_TYPEDEF_P (decl)
12513 /* ... not an enum, for instance. */
12514 && RECORD_OR_UNION_TYPE_P (TREE_TYPE (decl))
12515 && TYPE_LANG_SPECIFIC (TREE_TYPE (decl))
12516 && CLASSTYPE_USE_TEMPLATE (TREE_TYPE (decl)) == 2)
12517 {
12518 /* A partial or explicit specialization. Partial
12519 specializations might not be in the hash table, because
12520 there can be multiple differently-constrained variants.
12521
12522 template<typename T> class silly;
12523 template<typename T> requires true class silly {};
12524
12525 We need to find them, insert their TEMPLATE_DECL in the
12526 dep_hash, and then convert the dep we just found into a
12527 redirect. */
12528
12529 tree ti = TYPE_TEMPLATE_INFO (TREE_TYPE (decl));
12530 tree tmpl = TI_TEMPLATE (ti);
12531 tree partial = NULL_TREE;
12532 for (tree spec = DECL_TEMPLATE_SPECIALIZATIONS (tmpl);
12533 spec; spec = TREE_CHAIN (spec))
12534 if (DECL_TEMPLATE_RESULT (TREE_VALUE (spec)) == decl)
12535 {
12536 partial = TREE_VALUE (spec);
12537 break;
12538 }
12539
12540 if (partial)
12541 {
12542 /* Eagerly create an empty redirect. The following
12543 make_dependency call could cause hash reallocation,
12544 and invalidate slot's value. */
12545 depset *redirect = make_entity (decl, EK_REDIRECT);
12546
12547 /* Redirects are never reached -- always snap to their target. */
12548 redirect->set_flag_bit<DB_UNREACHED_BIT> ();
12549
12550 *slot = redirect;
12551
12552 depset *tmpl_dep = make_dependency (partial, EK_PARTIAL);
12553 gcc_checking_assert (tmpl_dep->get_entity_kind () == EK_PARTIAL);
12554
12555 redirect->deps.safe_push (tmpl_dep);
12556
12557 return redirect;
12558 }
12559 }
12560
12561 bool has_def = ek != EK_USING && has_definition (decl);
12562 if (ek > EK_BINDING)
12563 ek = EK_DECL;
12564
12565 /* The only OVERLOADS we should see are USING decls from
12566 bindings. */
12567 *slot = dep = make_entity (decl, ek, has_def);
12568
12569 if (TREE_CODE (decl) == TEMPLATE_DECL)
12570 {
12571 if (DECL_ALIAS_TEMPLATE_P (decl) && DECL_TEMPLATE_INFO (decl))
12572 dep->set_flag_bit<DB_ALIAS_TMPL_INST_BIT> ();
12573 else if (CHECKING_P)
12574 /* The template_result should otherwise not be in the
12575 table, or be an empty redirect (created above). */
12576 if (auto *eslot = entity_slot (DECL_TEMPLATE_RESULT (decl), false))
12577 gcc_checking_assert ((*eslot)->get_entity_kind () == EK_REDIRECT
12578 && !(*eslot)->deps.length ());
12579 }
12580
12581 if (ek != EK_USING
12582 && DECL_LANG_SPECIFIC (decl)
12583 && DECL_MODULE_IMPORT_P (decl))
12584 {
12585 /* Store the module number and index in cluster/section, so
12586 we don't have to look them up again. */
12587 unsigned index = import_entity_index (decl);
12588 module_state *from = import_entity_module (index);
12589 /* Remap will be zero for imports from partitions, which we
12590 want to treat as-if declared in this TU. */
12591 if (from->remap)
12592 {
12593 dep->cluster = index - from->entity_lwm;
12594 dep->section = from->remap;
12595 dep->set_flag_bit<DB_IMPORTED_BIT> ();
12596 }
12597 }
12598
12599 if (ek == EK_DECL
12600 && !dep->is_import ()
12601 && TREE_CODE (CP_DECL_CONTEXT (decl)) == NAMESPACE_DECL
12602 && !(TREE_CODE (decl) == TEMPLATE_DECL
12603 && DECL_UNINSTANTIATED_TEMPLATE_FRIEND_P (decl)))
12604 {
12605 tree ctx = CP_DECL_CONTEXT (decl);
12606 tree not_tmpl = STRIP_TEMPLATE (decl);
12607
12608 if (!TREE_PUBLIC (ctx))
12609 /* Member of internal namespace. */
12610 dep->set_flag_bit<DB_IS_INTERNAL_BIT> ();
12611 else if (VAR_OR_FUNCTION_DECL_P (not_tmpl)
12612 && DECL_THIS_STATIC (not_tmpl))
12613 {
12614 /* An internal decl. This is ok in a GM entity. */
12615 if (!(header_module_p ()
12616 || !DECL_LANG_SPECIFIC (not_tmpl)
12617 || !DECL_MODULE_PURVIEW_P (not_tmpl)))
12618 dep->set_flag_bit<DB_IS_INTERNAL_BIT> ();
12619 }
12620
12621 }
12622
12623 if (!dep->is_import ())
12624 worklist.safe_push (dep);
12625 }
12626
12627 dump (dumper::DEPEND)
12628 && dump ("%s on %s %C:%N found",
12629 ek == EK_REDIRECT ? "Redirect"
12630 : for_binding ? "Binding" : "Dependency",
12631 dep->entity_kind_name (), TREE_CODE (decl), decl);
12632
12633 return dep;
12634 }
12635
12636 /* DEP is a newly discovered dependency. Append it to current's
12637 depset. */
12638
12639 void
12640 depset::hash::add_dependency (depset *dep)
12641 {
12642 gcc_checking_assert (current && !is_key_order ());
12643 current->deps.safe_push (dep);
12644
12645 if (dep->is_internal () && !current->is_internal ())
12646 current->set_flag_bit<DB_REFS_INTERNAL_BIT> ();
12647
12648 if (current->get_entity_kind () == EK_USING
12649 && DECL_IMPLICIT_TYPEDEF_P (dep->get_entity ())
12650 && TREE_CODE (TREE_TYPE (dep->get_entity ())) == ENUMERAL_TYPE)
12651 {
12652 /* CURRENT is an unwrapped using-decl and DECL is an enum's
12653 implicit typedef. Is CURRENT a member of the enum? */
12654 tree c_decl = OVL_FUNCTION (current->get_entity ());
12655
12656 if (TREE_CODE (c_decl) == CONST_DECL
12657 && (current->deps[0]->get_entity ()
12658 == CP_DECL_CONTEXT (dep->get_entity ())))
12659 /* Make DECL depend on CURRENT. */
12660 dep->deps.safe_push (current);
12661 }
12662
12663 if (dep->is_unreached ())
12664 {
12665 /* The dependency is reachable now. */
12666 reached_unreached = true;
12667 dep->clear_flag_bit<DB_UNREACHED_BIT> ();
12668 dump (dumper::DEPEND)
12669 && dump ("Reaching unreached %s %C:%N", dep->entity_kind_name (),
12670 TREE_CODE (dep->get_entity ()), dep->get_entity ());
12671 }
12672 }
12673
12674 depset *
12675 depset::hash::add_dependency (tree decl, entity_kind ek)
12676 {
12677 depset *dep;
12678
12679 if (is_key_order ())
12680 {
12681 dep = find_dependency (decl);
12682 if (dep)
12683 {
12684 current->deps.safe_push (dep);
12685 dump (dumper::MERGE)
12686 && dump ("Key dependency on %s %C:%N found",
12687 dep->entity_kind_name (), TREE_CODE (decl), decl);
12688 }
12689 else
12690 {
12691 /* It's not a mergeable decl, look for it in the original
12692 table. */
12693 dep = chain->find_dependency (decl);
12694 gcc_checking_assert (dep);
12695 }
12696 }
12697 else
12698 {
12699 dep = make_dependency (decl, ek);
12700 if (dep->get_entity_kind () != EK_REDIRECT)
12701 add_dependency (dep);
12702 }
12703
12704 return dep;
12705 }
12706
12707 void
12708 depset::hash::add_namespace_context (depset *dep, tree ns)
12709 {
12710 depset *ns_dep = make_dependency (ns, depset::EK_NAMESPACE);
12711 dep->deps.safe_push (ns_dep);
12712
12713 /* Mark it as special if imported so we don't walk connect when
12714 SCCing. */
12715 if (!dep->is_binding () && ns_dep->is_import ())
12716 dep->set_special ();
12717 }
12718
12719 struct add_binding_data
12720 {
12721 tree ns;
12722 bitmap partitions;
12723 depset *binding;
12724 depset::hash *hash;
12725 bool met_namespace;
12726 };
12727
12728 bool
12729 depset::hash::add_binding_entity (tree decl, WMB_Flags flags, void *data_)
12730 {
12731 auto data = static_cast <add_binding_data *> (data_);
12732
12733 if (TREE_CODE (decl) != NAMESPACE_DECL || DECL_NAMESPACE_ALIAS (decl))
12734 {
12735 tree inner = decl;
12736
12737 if (TREE_CODE (inner) == CONST_DECL
12738 && TREE_CODE (DECL_CONTEXT (inner)) == ENUMERAL_TYPE)
12739 inner = TYPE_NAME (DECL_CONTEXT (inner));
12740 else if (TREE_CODE (inner) == TEMPLATE_DECL)
12741 inner = DECL_TEMPLATE_RESULT (inner);
12742
12743 if (!DECL_LANG_SPECIFIC (inner) || !DECL_MODULE_PURVIEW_P (inner))
12744 /* Ignore global module fragment entities. */
12745 return false;
12746
12747 if (VAR_OR_FUNCTION_DECL_P (inner)
12748 && DECL_THIS_STATIC (inner))
12749 {
12750 if (!header_module_p ())
12751 /* Ignore internal-linkage entitites. */
12752 return false;
12753 }
12754
12755 if ((TREE_CODE (decl) == VAR_DECL
12756 || TREE_CODE (decl) == TYPE_DECL)
12757 && DECL_TINFO_P (decl))
12758 /* Ignore TINFO things. */
12759 return false;
12760
12761 if (!(flags & WMB_Using) && CP_DECL_CONTEXT (decl) != data->ns)
12762 {
12763 /* A using that lost its wrapper or an unscoped enum
12764 constant. */
12765 flags = WMB_Flags (flags | WMB_Using);
12766 if (DECL_MODULE_EXPORT_P (TREE_CODE (decl) == CONST_DECL
12767 ? TYPE_NAME (TREE_TYPE (decl))
12768 : STRIP_TEMPLATE (decl)))
12769 flags = WMB_Flags (flags | WMB_Export);
12770 }
12771
12772 if (!data->binding)
12773 /* No binding to check. */;
12774 else if (flags & WMB_Using)
12775 {
12776 /* Look in the binding to see if we already have this
12777 using. */
12778 for (unsigned ix = data->binding->deps.length (); --ix;)
12779 {
12780 depset *d = data->binding->deps[ix];
12781 if (d->get_entity_kind () == EK_USING
12782 && OVL_FUNCTION (d->get_entity ()) == decl)
12783 {
12784 if (!(flags & WMB_Hidden))
12785 d->clear_hidden_binding ();
12786 if (flags & WMB_Export)
12787 OVL_EXPORT_P (d->get_entity ()) = true;
12788 return false;
12789 }
12790 }
12791 }
12792 else if (flags & WMB_Dups)
12793 {
12794 /* Look in the binding to see if we already have this decl. */
12795 for (unsigned ix = data->binding->deps.length (); --ix;)
12796 {
12797 depset *d = data->binding->deps[ix];
12798 if (d->get_entity () == decl)
12799 {
12800 if (!(flags & WMB_Hidden))
12801 d->clear_hidden_binding ();
12802 return false;
12803 }
12804 }
12805 }
12806
12807 /* We're adding something. */
12808 if (!data->binding)
12809 {
12810 data->binding = make_binding (data->ns, DECL_NAME (decl));
12811 data->hash->add_namespace_context (data->binding, data->ns);
12812
12813 depset **slot = data->hash->binding_slot (data->ns,
12814 DECL_NAME (decl), true);
12815 gcc_checking_assert (!*slot);
12816 *slot = data->binding;
12817 }
12818
12819 if (flags & WMB_Using)
12820 {
12821 decl = ovl_make (decl, NULL_TREE);
12822 if (flags & WMB_Export)
12823 OVL_EXPORT_P (decl) = true;
12824 }
12825
12826 depset *dep = data->hash->make_dependency
12827 (decl, flags & WMB_Using ? EK_USING : EK_FOR_BINDING);
12828 if (flags & WMB_Hidden)
12829 dep->set_hidden_binding ();
12830 data->binding->deps.safe_push (dep);
12831 /* Binding and contents are mutually dependent. */
12832 dep->deps.safe_push (data->binding);
12833
12834 return true;
12835 }
12836 else if (DECL_NAME (decl) && !data->met_namespace)
12837 {
12838 /* Namespace, walk exactly once. */
12839 gcc_checking_assert (TREE_PUBLIC (decl));
12840 data->met_namespace = true;
12841 if (data->hash->add_namespace_entities (decl, data->partitions)
12842 || DECL_MODULE_EXPORT_P (decl))
12843 {
12844 data->hash->make_dependency (decl, depset::EK_NAMESPACE);
12845 return true;
12846 }
12847 }
12848
12849 return false;
12850 }
12851
12852 /* Recursively find all the namespace bindings of NS.
12853 Add a depset for every binding that contains an export or
12854 module-linkage entity. Add a defining depset for every such decl
12855 that we need to write a definition. Such defining depsets depend
12856 on the binding depset. Returns true if we contain something
12857 explicitly exported. */
12858
12859 bool
12860 depset::hash::add_namespace_entities (tree ns, bitmap partitions)
12861 {
12862 dump () && dump ("Looking for writables in %N", ns);
12863 dump.indent ();
12864
12865 unsigned count = 0;
12866 add_binding_data data;
12867 data.ns = ns;
12868 data.partitions = partitions;
12869 data.hash = this;
12870
12871 hash_table<named_decl_hash>::iterator end
12872 (DECL_NAMESPACE_BINDINGS (ns)->end ());
12873 for (hash_table<named_decl_hash>::iterator iter
12874 (DECL_NAMESPACE_BINDINGS (ns)->begin ()); iter != end; ++iter)
12875 {
12876 data.binding = nullptr;
12877 data.met_namespace = false;
12878 if (walk_module_binding (*iter, partitions, add_binding_entity, &data))
12879 count++;
12880 }
12881
12882 if (count)
12883 dump () && dump ("Found %u entries", count);
12884 dump.outdent ();
12885
12886 return count != 0;
12887 }
12888
12889 void
12890 depset::hash::add_partial_entities (vec<tree, va_gc> *partial_classes)
12891 {
12892 for (unsigned ix = 0; ix != partial_classes->length (); ix++)
12893 {
12894 tree inner = (*partial_classes)[ix];
12895
12896 depset *dep = make_dependency (inner, depset::EK_DECL);
12897
12898 if (dep->get_entity_kind () == depset::EK_REDIRECT)
12899 /* We should have recorded the template as a partial
12900 specialization. */
12901 gcc_checking_assert (dep->deps[0]->get_entity_kind ()
12902 == depset::EK_PARTIAL);
12903 else
12904 /* It was an explicit specialization, not a partial one. */
12905 gcc_checking_assert (dep->get_entity_kind ()
12906 == depset::EK_SPECIALIZATION);
12907 }
12908 }
12909
12910 /* Add the members of imported classes that we defined in this TU.
12911 This will also include lazily created implicit member function
12912 declarations. (All others will be definitions.) */
12913
12914 void
12915 depset::hash::add_class_entities (vec<tree, va_gc> *class_members)
12916 {
12917 for (unsigned ix = 0; ix != class_members->length (); ix++)
12918 {
12919 tree defn = (*class_members)[ix];
12920 depset *dep = make_dependency (defn, EK_INNER_DECL);
12921
12922 if (dep->get_entity_kind () == EK_REDIRECT)
12923 dep = dep->deps[0];
12924
12925 /* Only non-instantiations need marking as members. */
12926 if (dep->get_entity_kind () == EK_DECL)
12927 dep->set_flag_bit <DB_IS_MEMBER_BIT> ();
12928 }
12929 }
12930
12931 /* We add the partial & explicit specializations, and the explicit
12932 instantiations. */
12933
12934 static void
12935 specialization_add (bool decl_p, spec_entry *entry, void *data_)
12936 {
12937 vec<spec_entry *> *data = reinterpret_cast <vec<spec_entry *> *> (data_);
12938
12939 if (!decl_p)
12940 {
12941 /* We exclusively use decls to locate things. Make sure there's
12942 no mismatch between the two specialization tables we keep.
12943 pt.c optimizes instantiation lookup using a complicated
12944 heuristic. We don't attempt to replicate that algorithm, but
12945 observe its behaviour and reproduce it upon read back. */
12946
12947 gcc_checking_assert (DECL_ALIAS_TEMPLATE_P (entry->tmpl)
12948 || TREE_CODE (entry->spec) == ENUMERAL_TYPE
12949 || DECL_CLASS_TEMPLATE_P (entry->tmpl));
12950
12951 /* Only alias templates can appear in both tables (and
12952 if they're in the type table they must also be in the decl table). */
12953 gcc_checking_assert
12954 (!match_mergeable_specialization (true, entry, false)
12955 == (decl_p || !DECL_ALIAS_TEMPLATE_P (entry->tmpl)));
12956 }
12957 else if (VAR_OR_FUNCTION_DECL_P (entry->spec))
12958 gcc_checking_assert (!DECL_LOCAL_DECL_P (entry->spec));
12959
12960 data->safe_push (entry);
12961 }
12962
12963 /* Arbitrary stable comparison. */
12964
12965 static int
12966 specialization_cmp (const void *a_, const void *b_)
12967 {
12968 const spec_entry *ea = *reinterpret_cast<const spec_entry *const *> (a_);
12969 const spec_entry *eb = *reinterpret_cast<const spec_entry *const *> (b_);
12970
12971 if (ea == eb)
12972 return 0;
12973
12974 tree a = ea->spec;
12975 tree b = eb->spec;
12976 if (TYPE_P (a))
12977 {
12978 a = TYPE_NAME (a);
12979 b = TYPE_NAME (b);
12980 }
12981
12982 if (a == b)
12983 /* This can happen with friend specializations. Just order by
12984 entry address. See note in depset_cmp. */
12985 return ea < eb ? -1 : +1;
12986
12987 return DECL_UID (a) < DECL_UID (b) ? -1 : +1;
12988 }
12989
12990 /* We add all kinds of specialializations. Implicit specializations
12991 should only streamed and walked if they are reachable from
12992 elsewhere. Hence the UNREACHED flag. This is making the
12993 assumption that it is cheaper to reinstantiate them on demand
12994 elsewhere, rather than stream them in when we instantiate their
12995 general template. Also, if we do stream them, we can only do that
12996 if they are not internal (which they can become if they themselves
12997 touch an internal entity?). */
12998
12999 void
13000 depset::hash::add_specializations (bool decl_p)
13001 {
13002 vec<spec_entry *> data;
13003 data.create (100);
13004 walk_specializations (decl_p, specialization_add, &data);
13005 data.qsort (specialization_cmp);
13006 while (data.length ())
13007 {
13008 spec_entry *entry = data.pop ();
13009 tree spec = entry->spec;
13010 int use_tpl = 0;
13011 bool is_alias = false;
13012 bool is_friend = false;
13013
13014 if (decl_p && DECL_UNINSTANTIATED_TEMPLATE_FRIEND_P (entry->tmpl))
13015 /* A friend of a template. This is keyed to the
13016 instantiation. */
13017 is_friend = true;
13018
13019 if (!decl_p && DECL_ALIAS_TEMPLATE_P (entry->tmpl))
13020 {
13021 spec = TYPE_NAME (spec);
13022 is_alias = true;
13023 }
13024
13025 if (decl_p || is_alias)
13026 {
13027 if (tree ti = DECL_TEMPLATE_INFO (spec))
13028 {
13029 tree tmpl = TI_TEMPLATE (ti);
13030
13031 use_tpl = DECL_USE_TEMPLATE (spec);
13032 if (spec == DECL_TEMPLATE_RESULT (tmpl))
13033 {
13034 spec = tmpl;
13035 gcc_checking_assert (DECL_USE_TEMPLATE (spec) == use_tpl);
13036 }
13037 else if (is_friend)
13038 {
13039 if (TI_TEMPLATE (ti) != entry->tmpl
13040 || !template_args_equal (TI_ARGS (ti), entry->tmpl))
13041 goto template_friend;
13042 }
13043 }
13044 else
13045 {
13046 template_friend:;
13047 gcc_checking_assert (is_friend);
13048 /* This is a friend of a template class, but not the one
13049 that generated entry->spec itself (i.e. it's an
13050 equivalent clone). We do not need to record
13051 this. */
13052 continue;
13053 }
13054 }
13055 else
13056 {
13057 if (TREE_CODE (spec) == ENUMERAL_TYPE)
13058 {
13059 tree ctx = DECL_CONTEXT (TYPE_NAME (spec));
13060
13061 if (TYPE_P (ctx))
13062 use_tpl = CLASSTYPE_USE_TEMPLATE (ctx);
13063 else
13064 use_tpl = DECL_USE_TEMPLATE (ctx);
13065 }
13066 else
13067 use_tpl = CLASSTYPE_USE_TEMPLATE (spec);
13068
13069 tree ti = TYPE_TEMPLATE_INFO (spec);
13070 tree tmpl = TI_TEMPLATE (ti);
13071
13072 spec = TYPE_NAME (spec);
13073 if (spec == DECL_TEMPLATE_RESULT (tmpl))
13074 {
13075 spec = tmpl;
13076 use_tpl = DECL_USE_TEMPLATE (spec);
13077 }
13078 }
13079
13080 bool needs_reaching = false;
13081 if (use_tpl == 1)
13082 /* Implicit instantiations only walked if we reach them. */
13083 needs_reaching = true;
13084 else if (!DECL_LANG_SPECIFIC (spec)
13085 || !DECL_MODULE_PURVIEW_P (spec))
13086 /* Likewise, GMF explicit or partial specializations. */
13087 needs_reaching = true;
13088
13089 #if false && CHECKING_P
13090 /* The instantiation isn't always on
13091 DECL_TEMPLATE_INSTANTIATIONS, */
13092 // FIXME: we probably need to remember this information?
13093 /* Verify the specialization is on the
13094 DECL_TEMPLATE_INSTANTIATIONS of the template. */
13095 for (tree cons = DECL_TEMPLATE_INSTANTIATIONS (entry->tmpl);
13096 cons; cons = TREE_CHAIN (cons))
13097 if (TREE_VALUE (cons) == entry->spec)
13098 {
13099 gcc_assert (entry->args == TREE_PURPOSE (cons));
13100 goto have_spec;
13101 }
13102 gcc_unreachable ();
13103 have_spec:;
13104 #endif
13105
13106 depset *dep = make_dependency (spec, depset::EK_SPECIALIZATION);
13107 if (dep->is_special ())
13108 {
13109 /* An already located specialization, this must be the TYPE
13110 corresponding to an alias_decl we found in the decl
13111 table. */
13112 spec_entry *other = reinterpret_cast <spec_entry *> (dep->deps[0]);
13113 gcc_checking_assert (!decl_p && is_alias && !dep->is_type_spec ());
13114 gcc_checking_assert (other->tmpl == entry->tmpl
13115 && template_args_equal (other->args, entry->args)
13116 && TREE_TYPE (other->spec) == entry->spec);
13117 dep->set_flag_bit<DB_ALIAS_SPEC_BIT> ();
13118 }
13119 else
13120 {
13121 gcc_checking_assert (decl_p || !is_alias);
13122 if (dep->get_entity_kind () == depset::EK_REDIRECT)
13123 dep = dep->deps[0];
13124 else if (dep->get_entity_kind () == depset::EK_SPECIALIZATION)
13125 {
13126 dep->set_special ();
13127 dep->deps.safe_push (reinterpret_cast<depset *> (entry));
13128 if (!decl_p)
13129 dep->set_flag_bit<DB_TYPE_SPEC_BIT> ();
13130 }
13131
13132 if (needs_reaching)
13133 dep->set_flag_bit<DB_UNREACHED_BIT> ();
13134 if (is_friend)
13135 dep->set_flag_bit<DB_FRIEND_SPEC_BIT> ();
13136 }
13137 }
13138 data.release ();
13139 }
13140
13141 /* Add a depset into the mergeable hash. */
13142
13143 void
13144 depset::hash::add_mergeable (depset *mergeable)
13145 {
13146 gcc_checking_assert (is_key_order ());
13147 entity_kind ek = mergeable->get_entity_kind ();
13148 tree decl = mergeable->get_entity ();
13149 gcc_checking_assert (ek < EK_DIRECT_HWM);
13150
13151 depset **slot = entity_slot (decl, true);
13152 gcc_checking_assert (!*slot);
13153 depset *dep = make_entity (decl, ek);
13154 *slot = dep;
13155
13156 worklist.safe_push (dep);
13157
13158 /* So we can locate the mergeable depset this depset refers to,
13159 mark the first dep. */
13160 dep->set_special ();
13161 dep->deps.safe_push (mergeable);
13162 }
13163
13164 /* Iteratively find dependencies. During the walk we may find more
13165 entries on the same binding that need walking. */
13166
13167 void
13168 depset::hash::find_dependencies (module_state *module)
13169 {
13170 trees_out walker (NULL, module, *this);
13171 vec<depset *> unreached;
13172 unreached.create (worklist.length ());
13173
13174 for (;;)
13175 {
13176 reached_unreached = false;
13177 while (worklist.length ())
13178 {
13179 depset *item = worklist.pop ();
13180
13181 gcc_checking_assert (!item->is_binding ());
13182 if (item->is_unreached ())
13183 unreached.quick_push (item);
13184 else
13185 {
13186 current = item;
13187 tree decl = current->get_entity ();
13188 dump (is_key_order () ? dumper::MERGE : dumper::DEPEND)
13189 && dump ("Dependencies of %s %C:%N",
13190 is_key_order () ? "key-order"
13191 : current->entity_kind_name (), TREE_CODE (decl), decl);
13192 dump.indent ();
13193 walker.begin ();
13194 if (current->get_entity_kind () == EK_USING)
13195 walker.tree_node (OVL_FUNCTION (decl));
13196 else if (TREE_VISITED (decl))
13197 /* A global tree. */;
13198 else if (TREE_CODE (decl) == NAMESPACE_DECL
13199 && !DECL_NAMESPACE_ALIAS (decl))
13200 add_namespace_context (current, CP_DECL_CONTEXT (decl));
13201 else
13202 {
13203 walker.mark_declaration (decl, current->has_defn ());
13204
13205 // FIXME: Perhaps p1815 makes this redundant? Or at
13206 // least simplifies it. Voldemort types are only
13207 // ever emissable when containing (inline) function
13208 // definition is emitted?
13209 /* Turn the Sneakoscope on when depending the decl. */
13210 sneakoscope = true;
13211 walker.decl_value (decl, current);
13212 sneakoscope = false;
13213 if (current->has_defn ())
13214 walker.write_definition (decl);
13215 }
13216 walker.end ();
13217
13218 if (!walker.is_key_order ()
13219 && TREE_CODE (decl) == TEMPLATE_DECL
13220 && !DECL_UNINSTANTIATED_TEMPLATE_FRIEND_P (decl))
13221 /* Mark all the explicit & partial specializations as
13222 reachable. */
13223 for (tree cons = DECL_TEMPLATE_INSTANTIATIONS (decl);
13224 cons; cons = TREE_CHAIN (cons))
13225 {
13226 tree spec = TREE_VALUE (cons);
13227 if (TYPE_P (spec))
13228 spec = TYPE_NAME (spec);
13229 int use_tpl;
13230 node_template_info (spec, use_tpl);
13231 if (use_tpl & 2)
13232 {
13233 depset *spec_dep = find_dependency (spec);
13234 if (spec_dep->get_entity_kind () == EK_REDIRECT)
13235 spec_dep = spec_dep->deps[0];
13236 if (spec_dep->is_unreached ())
13237 {
13238 reached_unreached = true;
13239 spec_dep->clear_flag_bit<DB_UNREACHED_BIT> ();
13240 dump (dumper::DEPEND)
13241 && dump ("Reaching unreached specialization"
13242 " %C:%N", TREE_CODE (spec), spec);
13243 }
13244 }
13245 }
13246
13247 dump.outdent ();
13248 current = NULL;
13249 }
13250 }
13251
13252 if (!reached_unreached)
13253 break;
13254
13255 /* It's possible the we reached the unreached before we
13256 processed it in the above loop, so we'll be doing this an
13257 extra time. However, to avoid that we have to do some
13258 bit shuffling that also involves a scan of the list.
13259 Swings & roundabouts I guess. */
13260 std::swap (worklist, unreached);
13261 }
13262
13263 unreached.release ();
13264 }
13265
13266 /* Compare two entries of a single binding. TYPE_DECL before
13267 non-exported before exported. */
13268
13269 static int
13270 binding_cmp (const void *a_, const void *b_)
13271 {
13272 depset *a = *(depset *const *)a_;
13273 depset *b = *(depset *const *)b_;
13274
13275 tree a_ent = a->get_entity ();
13276 tree b_ent = b->get_entity ();
13277 gcc_checking_assert (a_ent != b_ent
13278 && !a->is_binding ()
13279 && !b->is_binding ());
13280
13281 /* Implicit typedefs come first. */
13282 bool a_implicit = DECL_IMPLICIT_TYPEDEF_P (a_ent);
13283 bool b_implicit = DECL_IMPLICIT_TYPEDEF_P (b_ent);
13284 if (a_implicit || b_implicit)
13285 {
13286 /* A binding with two implicit type decls? That's unpossible! */
13287 gcc_checking_assert (!(a_implicit && b_implicit));
13288 return a_implicit ? -1 : +1; /* Implicit first. */
13289 }
13290
13291 /* Hidden before non-hidden. */
13292 bool a_hidden = a->is_hidden ();
13293 bool b_hidden = b->is_hidden ();
13294 if (a_hidden != b_hidden)
13295 return a_hidden ? -1 : +1;
13296
13297 bool a_using = a->get_entity_kind () == depset::EK_USING;
13298 bool a_export;
13299 if (a_using)
13300 {
13301 a_export = OVL_EXPORT_P (a_ent);
13302 a_ent = OVL_FUNCTION (a_ent);
13303 }
13304 else
13305 a_export = DECL_MODULE_EXPORT_P (TREE_CODE (a_ent) == CONST_DECL
13306 ? TYPE_NAME (TREE_TYPE (a_ent))
13307 : STRIP_TEMPLATE (a_ent));
13308
13309 bool b_using = b->get_entity_kind () == depset::EK_USING;
13310 bool b_export;
13311 if (b_using)
13312 {
13313 b_export = OVL_EXPORT_P (b_ent);
13314 b_ent = OVL_FUNCTION (b_ent);
13315 }
13316 else
13317 b_export = DECL_MODULE_EXPORT_P (TREE_CODE (b_ent) == CONST_DECL
13318 ? TYPE_NAME (TREE_TYPE (b_ent))
13319 : STRIP_TEMPLATE (b_ent));
13320
13321 /* Non-exports before exports. */
13322 if (a_export != b_export)
13323 return a_export ? +1 : -1;
13324
13325 /* At this point we don't care, but want a stable sort. */
13326
13327 if (a_using != b_using)
13328 /* using first. */
13329 return a_using? -1 : +1;
13330
13331 return DECL_UID (a_ent) < DECL_UID (b_ent) ? -1 : +1;
13332 }
13333
13334 /* Sort the bindings, issue errors about bad internal refs. */
13335
13336 bool
13337 depset::hash::finalize_dependencies ()
13338 {
13339 bool ok = true;
13340 depset::hash::iterator end (this->end ());
13341 for (depset::hash::iterator iter (begin ()); iter != end; ++iter)
13342 {
13343 depset *dep = *iter;
13344 if (dep->is_binding ())
13345 {
13346 /* Keep the containing namespace dep first. */
13347 gcc_checking_assert (dep->deps.length () > 1
13348 && (dep->deps[0]->get_entity_kind ()
13349 == EK_NAMESPACE)
13350 && (dep->deps[0]->get_entity ()
13351 == dep->get_entity ()));
13352 if (dep->deps.length () > 2)
13353 gcc_qsort (&dep->deps[1], dep->deps.length () - 1,
13354 sizeof (dep->deps[1]), binding_cmp);
13355 }
13356 else if (dep->refs_internal ())
13357 {
13358 for (unsigned ix = dep->deps.length (); ix--;)
13359 {
13360 depset *rdep = dep->deps[ix];
13361 if (rdep->is_internal ())
13362 {
13363 // FIXME:QOI Better location information? We're
13364 // losing, so it doesn't matter about efficiency
13365 tree decl = dep->get_entity ();
13366 error_at (DECL_SOURCE_LOCATION (decl),
13367 "%q#D references internal linkage entity %q#D",
13368 decl, rdep->get_entity ());
13369 break;
13370 }
13371 }
13372 ok = false;
13373 }
13374 }
13375
13376 return ok;
13377 }
13378
13379 /* Core of TARJAN's algorithm to find Strongly Connected Components
13380 within a graph. See https://en.wikipedia.org/wiki/
13381 Tarjan%27s_strongly_connected_components_algorithm for details.
13382
13383 We use depset::section as lowlink. Completed nodes have
13384 depset::cluster containing the cluster number, with the top
13385 bit set.
13386
13387 A useful property is that the output vector is a reverse
13388 topological sort of the resulting DAG. In our case that means
13389 dependent SCCs are found before their dependers. We make use of
13390 that property. */
13391
13392 void
13393 depset::tarjan::connect (depset *v)
13394 {
13395 gcc_checking_assert (v->is_binding ()
13396 || !(v->is_unreached () || v->is_import ()));
13397
13398 v->cluster = v->section = ++index;
13399 stack.safe_push (v);
13400
13401 /* Walk all our dependencies, ignore a first marked slot */
13402 for (unsigned ix = v->is_special (); ix != v->deps.length (); ix++)
13403 {
13404 depset *dep = v->deps[ix];
13405
13406 if (dep->is_binding () || !dep->is_import ())
13407 {
13408 unsigned lwm = dep->cluster;
13409
13410 if (!dep->cluster)
13411 {
13412 /* A new node. Connect it. */
13413 connect (dep);
13414 lwm = dep->section;
13415 }
13416
13417 if (dep->section && v->section > lwm)
13418 v->section = lwm;
13419 }
13420 }
13421
13422 if (v->section == v->cluster)
13423 {
13424 /* Root of a new SCC. Push all the members onto the result list. */
13425 unsigned num = v->cluster;
13426 depset *p;
13427 do
13428 {
13429 p = stack.pop ();
13430 p->cluster = num;
13431 p->section = 0;
13432 result.quick_push (p);
13433 }
13434 while (p != v);
13435 }
13436 }
13437
13438 /* Compare two depsets. The specific ordering is unimportant, we're
13439 just trying to get consistency. */
13440
13441 static int
13442 depset_cmp (const void *a_, const void *b_)
13443 {
13444 depset *a = *(depset *const *)a_;
13445 depset *b = *(depset *const *)b_;
13446
13447 depset::entity_kind a_kind = a->get_entity_kind ();
13448 depset::entity_kind b_kind = b->get_entity_kind ();
13449
13450 if (a_kind != b_kind)
13451 /* Different entity kinds, order by that. */
13452 return a_kind < b_kind ? -1 : +1;
13453
13454 tree a_decl = a->get_entity ();
13455 tree b_decl = b->get_entity ();
13456 if (a_kind == depset::EK_USING)
13457 {
13458 /* If one is a using, the other must be too. */
13459 a_decl = OVL_FUNCTION (a_decl);
13460 b_decl = OVL_FUNCTION (b_decl);
13461 }
13462
13463 if (a_decl != b_decl)
13464 /* Different entities, order by their UID. */
13465 return DECL_UID (a_decl) < DECL_UID (b_decl) ? -1 : +1;
13466
13467 if (a_kind == depset::EK_BINDING)
13468 {
13469 /* Both are bindings. Order by identifier hash. */
13470 gcc_checking_assert (a->get_name () != b->get_name ());
13471 return (IDENTIFIER_HASH_VALUE (a->get_name ())
13472 < IDENTIFIER_HASH_VALUE (b->get_name ())
13473 ? -1 : +1);
13474 }
13475
13476 /* They are the same decl. This can happen with two using decls
13477 pointing to the same target. The best we can aim for is
13478 consistently telling qsort how to order them. Hopefully we'll
13479 never have to debug a case that depends on this. Oh, who am I
13480 kidding? Good luck. */
13481 gcc_checking_assert (a_kind == depset::EK_USING);
13482
13483 /* Order by depset address. Not the best, but it is something. */
13484 return a < b ? -1 : +1;
13485 }
13486
13487 /* Sort the clusters in SCC such that those that depend on one another
13488 are placed later. */
13489
13490 // FIXME: I am not convinced this is needed and, if needed,
13491 // sufficient. We emit the decls in this order but that emission
13492 // could walk into later decls (from the body of the decl, or default
13493 // arg-like things). Why doesn't that walk do the right thing? And
13494 // if it DTRT why do we need to sort here -- won't things naturally
13495 // work? I think part of the issue is that when we're going to refer
13496 // to an entity by name, and that entity is in the same cluster as us,
13497 // we need to actually walk that entity, if we've not already walked
13498 // it.
13499 static void
13500 sort_cluster (depset::hash *original, depset *scc[], unsigned size)
13501 {
13502 depset::hash table (size, original);
13503
13504 dump.indent ();
13505
13506 /* Place bindings last, usings before that. It's not strictly
13507 necessary, but it does make things neater. Says Mr OCD. */
13508 unsigned bind_lwm = size;
13509 unsigned use_lwm = size;
13510 for (unsigned ix = 0; ix != use_lwm;)
13511 {
13512 depset *dep = scc[ix];
13513 switch (dep->get_entity_kind ())
13514 {
13515 case depset::EK_BINDING:
13516 /* Move to end. No increment. Notice this could be moving
13517 a using decl, which we'll then move again. */
13518 if (--bind_lwm != ix)
13519 {
13520 scc[ix] = scc[bind_lwm];
13521 scc[bind_lwm] = dep;
13522 }
13523 if (use_lwm > bind_lwm)
13524 {
13525 use_lwm--;
13526 break;
13527 }
13528 /* We must have copied a using, so move it too. */
13529 dep = scc[ix];
13530 gcc_checking_assert (dep->get_entity_kind () == depset::EK_USING);
13531 /* FALLTHROUGH */
13532
13533 case depset::EK_USING:
13534 if (--use_lwm != ix)
13535 {
13536 scc[ix] = scc[use_lwm];
13537 scc[use_lwm] = dep;
13538 }
13539 break;
13540
13541 case depset::EK_DECL:
13542 case depset::EK_SPECIALIZATION:
13543 case depset::EK_PARTIAL:
13544 table.add_mergeable (dep);
13545 ix++;
13546 break;
13547
13548 default:
13549 gcc_unreachable ();
13550 }
13551 }
13552
13553 gcc_checking_assert (use_lwm <= bind_lwm);
13554 dump (dumper::MERGE) && dump ("Ordering %u/%u depsets", use_lwm, size);
13555
13556 table.find_dependencies (nullptr);
13557
13558 vec<depset *> order = table.connect ();
13559 gcc_checking_assert (order.length () == use_lwm);
13560
13561 /* Now rewrite entries [0,lwm), in the dependency order we
13562 discovered. Usually each entity is in its own cluster. Rarely,
13563 we can get multi-entity clusters, in which case all but one must
13564 only be reached from within the cluster. This happens for
13565 something like:
13566
13567 template<typename T>
13568 auto Foo (const T &arg) -> TPL<decltype (arg)>;
13569
13570 The instantiation of TPL will be in the specialization table, and
13571 refer to Foo via arg. But we can only get to that specialization
13572 from Foo's declaration, so we only need to treat Foo as mergable
13573 (We'll do structural comparison of TPL<decltype (arg)>).
13574
13575 Finding the single cluster entry dep is very tricky and
13576 expensive. Let's just not do that. It's harmless in this case
13577 anyway. */
13578 unsigned pos = 0;
13579 unsigned cluster = ~0u;
13580 for (unsigned ix = 0; ix != order.length (); ix++)
13581 {
13582 gcc_checking_assert (order[ix]->is_special ());
13583 depset *dep = order[ix]->deps[0];
13584 scc[pos++] = dep;
13585 dump (dumper::MERGE)
13586 && dump ("Mergeable %u is %N%s", ix, dep->get_entity (),
13587 order[ix]->cluster == cluster ? " (tight)" : "");
13588 cluster = order[ix]->cluster;
13589 }
13590
13591 gcc_checking_assert (pos == use_lwm);
13592
13593 order.release ();
13594 dump (dumper::MERGE) && dump ("Ordered %u keys", pos);
13595 dump.outdent ();
13596 }
13597
13598 /* Reduce graph to SCCS clusters. SCCS will be populated with the
13599 depsets in dependency order. Each depset's CLUSTER field contains
13600 its cluster number. Each SCC has a unique cluster number, and are
13601 contiguous in SCCS. Cluster numbers are otherwise arbitrary. */
13602
13603 vec<depset *>
13604 depset::hash::connect ()
13605 {
13606 tarjan connector (size ());
13607 vec<depset *> deps;
13608 deps.create (size ());
13609 iterator end (this->end ());
13610 for (iterator iter (begin ()); iter != end; ++iter)
13611 {
13612 depset *item = *iter;
13613
13614 entity_kind kind = item->get_entity_kind ();
13615 if (kind == EK_BINDING
13616 || !(kind == EK_REDIRECT
13617 || item->is_unreached ()
13618 || item->is_import ()))
13619 deps.quick_push (item);
13620 }
13621
13622 /* Iteration over the hash table is an unspecified ordering. While
13623 that has advantages, it causes 2 problems. Firstly repeatable
13624 builds are tricky. Secondly creating testcases that check
13625 dependencies are correct by making sure a bad ordering would
13626 happen if that was wrong. */
13627 deps.qsort (depset_cmp);
13628
13629 while (deps.length ())
13630 {
13631 depset *v = deps.pop ();
13632 dump (dumper::CLUSTER) &&
13633 (v->is_binding ()
13634 ? dump ("Connecting binding %P", v->get_entity (), v->get_name ())
13635 : dump ("Connecting %s %s %C:%N",
13636 is_key_order () ? "key-order"
13637 : !v->has_defn () ? "declaration" : "definition",
13638 v->entity_kind_name (), TREE_CODE (v->get_entity ()),
13639 v->get_entity ()));
13640 if (!v->cluster)
13641 connector.connect (v);
13642 }
13643
13644 deps.release ();
13645 return connector.result;
13646 }
13647
13648 /* Load the entities referred to by this pendset. */
13649
13650 static bool
13651 pendset_lazy_load (pendset *pendings, bool specializations_p)
13652 {
13653 bool ok = true;
13654
13655 for (unsigned ix = 0; ok && ix != pendings->num; ix++)
13656 {
13657 unsigned index = pendings->values[ix];
13658 if (index & ~(~0u >> 1))
13659 {
13660 /* An indirection. */
13661 if (specializations_p)
13662 index = ~index;
13663 pendset *other = pending_table->get (index, true);
13664 if (!pendset_lazy_load (other, specializations_p))
13665 ok = false;
13666 }
13667 else
13668 {
13669 module_state *module = import_entity_module (index);
13670 binding_slot *slot = &(*entity_ary)[index];
13671 if (!slot->is_lazy ())
13672 dump () && dump ("Specialiation %M[%u] already loaded",
13673 module, index - module->entity_lwm);
13674 else if (!module->lazy_load (index - module->entity_lwm, slot))
13675 ok = false;
13676 }
13677 }
13678
13679 /* We own set, so delete it now. */
13680 delete pendings;
13681
13682 return ok;
13683 }
13684
13685 /* Initialize location spans. */
13686
13687 void
13688 loc_spans::init (const line_maps *lmaps, const line_map_ordinary *map)
13689 {
13690 gcc_checking_assert (!init_p ());
13691 spans = new vec<span> ();
13692 spans->reserve (20);
13693
13694 span interval;
13695 interval.ordinary.first = 0;
13696 interval.macro.second = MAX_LOCATION_T + 1;
13697 interval.ordinary_delta = interval.macro_delta = 0;
13698
13699 /* A span for reserved fixed locs. */
13700 interval.ordinary.second
13701 = MAP_START_LOCATION (LINEMAPS_ORDINARY_MAP_AT (line_table, 0));
13702 interval.macro.first = interval.macro.second;
13703 dump (dumper::LOCATION)
13704 && dump ("Fixed span %u ordinary:[%u,%u) macro:[%u,%u)", spans->length (),
13705 interval.ordinary.first, interval.ordinary.second,
13706 interval.macro.first, interval.macro.second);
13707 spans->quick_push (interval);
13708
13709 /* A span for command line & forced headers. */
13710 interval.ordinary.first = interval.ordinary.second;
13711 interval.macro.second = interval.macro.first;
13712 if (map)
13713 {
13714 interval.ordinary.second = map->start_location;
13715 interval.macro.first = LINEMAPS_MACRO_LOWEST_LOCATION (lmaps);
13716 }
13717 dump (dumper::LOCATION)
13718 && dump ("Pre span %u ordinary:[%u,%u) macro:[%u,%u)", spans->length (),
13719 interval.ordinary.first, interval.ordinary.second,
13720 interval.macro.first, interval.macro.second);
13721 spans->quick_push (interval);
13722
13723 /* Start an interval for the main file. */
13724 interval.ordinary.first = interval.ordinary.second;
13725 interval.macro.second = interval.macro.first;
13726 dump (dumper::LOCATION)
13727 && dump ("Main span %u ordinary:[%u,*) macro:[*,%u)", spans->length (),
13728 interval.ordinary.first, interval.macro.second);
13729 spans->quick_push (interval);
13730 }
13731
13732 /* Reopen the span, if we want the about-to-be-inserted set of maps to
13733 be propagated in our own location table. I.e. we are the primary
13734 interface and we're importing a partition. */
13735
13736 bool
13737 loc_spans::maybe_propagate (module_state *import,
13738 location_t loc = UNKNOWN_LOCATION)
13739 {
13740 bool opened = (module_interface_p () && !module_partition_p ()
13741 && import->is_partition ());
13742 if (opened)
13743 open (loc);
13744 return opened;
13745 }
13746
13747 /* Open a new linemap interval. The just-created ordinary map is the
13748 first map of the interval. */
13749
13750 void
13751 loc_spans::open (location_t hwm = UNKNOWN_LOCATION)
13752 {
13753 if (hwm == UNKNOWN_LOCATION)
13754 hwm = MAP_START_LOCATION (LINEMAPS_LAST_ORDINARY_MAP (line_table));
13755
13756 span interval;
13757 interval.ordinary.first = interval.ordinary.second = hwm;
13758 interval.macro.first = interval.macro.second
13759 = LINEMAPS_MACRO_LOWEST_LOCATION (line_table);
13760 interval.ordinary_delta = interval.macro_delta = 0;
13761 dump (dumper::LOCATION)
13762 && dump ("Opening span %u ordinary:[%u,... macro:...,%u)",
13763 spans->length (), interval.ordinary.first,
13764 interval.macro.second);
13765 spans->safe_push (interval);
13766 }
13767
13768 /* Close out the current linemap interval. The last maps are within
13769 the interval. */
13770
13771 void
13772 loc_spans::close ()
13773 {
13774 span &interval = spans->last ();
13775
13776 interval.ordinary.second
13777 = ((line_table->highest_location + (1 << line_table->default_range_bits))
13778 & ~((1u << line_table->default_range_bits) - 1));
13779 interval.macro.first = LINEMAPS_MACRO_LOWEST_LOCATION (line_table);
13780 dump (dumper::LOCATION)
13781 && dump ("Closing span %u ordinary:[%u,%u) macro:[%u,%u)",
13782 spans->length () - 1,
13783 interval.ordinary.first,interval.ordinary.second,
13784 interval.macro.first, interval.macro.second);
13785 }
13786
13787 /* Given an ordinary location LOC, return the lmap_interval it resides
13788 in. NULL if it is not in an interval. */
13789
13790 const loc_spans::span *
13791 loc_spans::ordinary (location_t loc)
13792 {
13793 unsigned len = spans->length ();
13794 unsigned pos = 0;
13795 while (len)
13796 {
13797 unsigned half = len / 2;
13798 const span &probe = (*spans)[pos + half];
13799 if (loc < probe.ordinary.first)
13800 len = half;
13801 else if (loc < probe.ordinary.second)
13802 return &probe;
13803 else
13804 {
13805 pos += half + 1;
13806 len = len - (half + 1);
13807 }
13808 }
13809 return NULL;
13810 }
13811
13812 /* Likewise, given a macro location LOC, return the lmap interval it
13813 resides in. */
13814
13815 const loc_spans::span *
13816 loc_spans::macro (location_t loc)
13817 {
13818 unsigned len = spans->length ();
13819 unsigned pos = 0;
13820 while (len)
13821 {
13822 unsigned half = len / 2;
13823 const span &probe = (*spans)[pos + half];
13824 if (loc >= probe.macro.second)
13825 len = half;
13826 else if (loc >= probe.macro.first)
13827 return &probe;
13828 else
13829 {
13830 pos += half + 1;
13831 len = len - (half + 1);
13832 }
13833 }
13834 return NULL;
13835 }
13836
13837 /* Return the ordinary location closest to FROM. */
13838
13839 static location_t
13840 ordinary_loc_of (line_maps *lmaps, location_t from)
13841 {
13842 while (!IS_ORDINARY_LOC (from))
13843 {
13844 if (IS_ADHOC_LOC (from))
13845 from = get_location_from_adhoc_loc (lmaps, from);
13846 if (IS_MACRO_LOC (from))
13847 {
13848 /* Find the ordinary location nearest FROM. */
13849 const line_map *map = linemap_lookup (lmaps, from);
13850 const line_map_macro *mac_map = linemap_check_macro (map);
13851 from = MACRO_MAP_EXPANSION_POINT_LOCATION (mac_map);
13852 }
13853 }
13854 return from;
13855 }
13856
13857 static module_state **
13858 get_module_slot (tree name, module_state *parent, bool partition, bool insert)
13859 {
13860 module_state_hash::compare_type ct (name, uintptr_t (parent) | partition);
13861 hashval_t hv = module_state_hash::hash (ct);
13862
13863 return modules_hash->find_slot_with_hash (ct, hv, insert ? INSERT : NO_INSERT);
13864 }
13865
13866 static module_state *
13867 get_primary (module_state *parent)
13868 {
13869 while (parent->is_partition ())
13870 parent = parent->parent;
13871
13872 if (!parent->name)
13873 // Implementation unit has null name
13874 parent = parent->parent;
13875
13876 return parent;
13877 }
13878
13879 /* Find or create module NAME & PARENT in the hash table. */
13880
13881 module_state *
13882 get_module (tree name, module_state *parent, bool partition)
13883 {
13884 if (partition)
13885 {
13886 if (!parent)
13887 parent = get_primary ((*modules)[0]);
13888
13889 if (!parent->is_partition () && !parent->flatname)
13890 parent->set_flatname ();
13891 }
13892
13893 module_state **slot = get_module_slot (name, parent, partition, true);
13894 module_state *state = *slot;
13895 if (!state)
13896 {
13897 state = (new (ggc_alloc<module_state> ())
13898 module_state (name, parent, partition));
13899 *slot = state;
13900 }
13901 return state;
13902 }
13903
13904 /* Process string name PTR into a module_state. */
13905
13906 static module_state *
13907 get_module (const char *ptr)
13908 {
13909 if (ptr[0] == '.' ? IS_DIR_SEPARATOR (ptr[1]) : IS_ABSOLUTE_PATH (ptr))
13910 /* A header name. */
13911 return get_module (build_string (strlen (ptr), ptr));
13912
13913 bool partition = false;
13914 module_state *mod = NULL;
13915
13916 for (const char *probe = ptr;; probe++)
13917 if (!*probe || *probe == '.' || *probe == ':')
13918 {
13919 if (probe == ptr)
13920 return NULL;
13921
13922 mod = get_module (get_identifier_with_length (ptr, probe - ptr),
13923 mod, partition);
13924 ptr = probe;
13925 if (*ptr == ':')
13926 {
13927 if (partition)
13928 return NULL;
13929 partition = true;
13930 }
13931
13932 if (!*ptr++)
13933 break;
13934 }
13935 else if (!(ISALPHA (*probe) || *probe == '_'
13936 || (probe != ptr && ISDIGIT (*probe))))
13937 return NULL;
13938
13939 return mod;
13940 }
13941
13942 /* Create a new mapper connecting to OPTION. */
13943
13944 module_client *
13945 make_mapper (location_t loc)
13946 {
13947 timevar_start (TV_MODULE_MAPPER);
13948 const char *option = module_mapper_name;
13949 if (!option)
13950 option = getenv ("CXX_MODULE_MAPPER");
13951
13952 mapper = module_client::open_module_client
13953 (loc, option, &set_cmi_repo,
13954 (save_decoded_options[0].opt_index == OPT_SPECIAL_program_name)
13955 && save_decoded_options[0].arg != progname
13956 ? save_decoded_options[0].arg : nullptr);
13957
13958 timevar_stop (TV_MODULE_MAPPER);
13959
13960 return mapper;
13961 }
13962
13963 /* If THIS is the current purview, issue an import error and return false. */
13964
13965 bool
13966 module_state::check_not_purview (location_t from)
13967 {
13968 module_state *imp = (*modules)[0];
13969 if (imp && !imp->name)
13970 imp = imp->parent;
13971 if (imp == this)
13972 {
13973 /* Cannot import the current module. */
13974 error_at (from, "cannot import module in its own purview");
13975 inform (loc, "module %qs declared here", get_flatname ());
13976 return false;
13977 }
13978 return true;
13979 }
13980
13981 /* Module name substitutions. */
13982 static vec<module_state *,va_heap> substs;
13983
13984 void
13985 module_state::mangle (bool include_partition)
13986 {
13987 if (subst)
13988 mangle_module_substitution (subst - 1);
13989 else
13990 {
13991 if (parent)
13992 parent->mangle (include_partition);
13993 if (include_partition || !is_partition ())
13994 {
13995 char p = 0;
13996 // Partitions are significant for global initializer functions
13997 if (is_partition () && !parent->is_partition ())
13998 p = 'P';
13999 substs.safe_push (this);
14000 subst = substs.length ();
14001 mangle_identifier (p, name);
14002 }
14003 }
14004 }
14005
14006 void
14007 mangle_module (int mod, bool include_partition)
14008 {
14009 module_state *imp = (*modules)[mod];
14010
14011 if (!imp->name)
14012 /* Set when importing the primary module interface. */
14013 imp = imp->parent;
14014
14015 imp->mangle (include_partition);
14016 }
14017
14018 /* Clean up substitutions. */
14019 void
14020 mangle_module_fini ()
14021 {
14022 while (substs.length ())
14023 substs.pop ()->subst = 0;
14024 }
14025
14026 /* Announce WHAT about the module. */
14027
14028 void
14029 module_state::announce (const char *what) const
14030 {
14031 if (noisy_p ())
14032 {
14033 fprintf (stderr, " %s:%s", what, get_flatname ());
14034 fflush (stderr);
14035 }
14036 }
14037
14038 /* A human-readable README section. The contents of this section to
14039 not contribute to the CRC, so the contents can change per
14040 compilation. That allows us to embed CWD, hostname, build time and
14041 what not. It is a STRTAB that may be extracted with:
14042 readelf -pgnu.c++.README $(module).gcm */
14043
14044 void
14045 module_state::write_readme (elf_out *to, cpp_reader *reader,
14046 const char *dialect, unsigned extensions)
14047 {
14048 bytes_out readme (to);
14049
14050 readme.begin (false);
14051
14052 readme.printf ("GNU C++ %smodule%s%s",
14053 is_header () ? "header " : is_partition () ? "" : "primary ",
14054 is_header () ? ""
14055 : is_interface () ? " interface" : " implementation",
14056 is_partition () ? " partition" : "");
14057
14058 /* Compiler's version. */
14059 readme.printf ("compiler: %s", version_string);
14060
14061 /* Module format version. */
14062 verstr_t string;
14063 version2string (MODULE_VERSION, string);
14064 readme.printf ("version: %s", string);
14065
14066 /* Module information. */
14067 readme.printf ("module: %s", get_flatname ());
14068 readme.printf ("source: %s", main_input_filename);
14069 readme.printf ("dialect: %s", dialect);
14070 if (extensions)
14071 readme.printf ("extensions: %s",
14072 extensions & SE_OPENMP ? "-fopenmp" : "");
14073
14074 /* The following fields could be expected to change between
14075 otherwise identical compilations. Consider a distributed build
14076 system. We should have a way of overriding that. */
14077 if (char *cwd = getcwd (NULL, 0))
14078 {
14079 readme.printf ("cwd: %s", cwd);
14080 free (cwd);
14081 }
14082 readme.printf ("repository: %s", cmi_repo ? cmi_repo : ".");
14083 #if NETWORKING
14084 {
14085 char hostname[64];
14086 if (!gethostname (hostname, sizeof (hostname)))
14087 readme.printf ("host: %s", hostname);
14088 }
14089 #endif
14090 {
14091 /* This of course will change! */
14092 time_t stampy;
14093 auto kind = cpp_get_date (reader, &stampy);
14094 if (kind != CPP_time_kind::UNKNOWN)
14095 {
14096 struct tm *time;
14097
14098 time = gmtime (&stampy);
14099 readme.print_time ("build", time, "UTC");
14100
14101 if (kind == CPP_time_kind::DYNAMIC)
14102 {
14103 time = localtime (&stampy);
14104 readme.print_time ("local", time,
14105 #if defined (__USE_MISC) || defined (__USE_BSD) /* Is there a better way? */
14106 time->tm_zone
14107 #else
14108 ""
14109 #endif
14110 );
14111 }
14112 }
14113 }
14114
14115 /* Its direct imports. */
14116 for (unsigned ix = 1; ix < modules->length (); ix++)
14117 {
14118 module_state *state = (*modules)[ix];
14119
14120 if (state->is_direct ())
14121 readme.printf ("%s: %s %s", state->exported_p ? "export" : "import",
14122 state->get_flatname (), state->filename);
14123 }
14124
14125 readme.end (to, to->name (MOD_SNAME_PFX ".README"), NULL);
14126 }
14127
14128 /* Sort environment var names in reverse order. */
14129
14130 static int
14131 env_var_cmp (const void *a_, const void *b_)
14132 {
14133 const unsigned char *a = *(const unsigned char *const *)a_;
14134 const unsigned char *b = *(const unsigned char *const *)b_;
14135
14136 for (unsigned ix = 0; ; ix++)
14137 {
14138 bool a_end = !a[ix] || a[ix] == '=';
14139 if (a[ix] == b[ix])
14140 {
14141 if (a_end)
14142 break;
14143 }
14144 else
14145 {
14146 bool b_end = !b[ix] || b[ix] == '=';
14147
14148 if (!a_end && !b_end)
14149 return a[ix] < b[ix] ? +1 : -1;
14150 if (a_end && b_end)
14151 break;
14152 return a_end ? +1 : -1;
14153 }
14154 }
14155
14156 return 0;
14157 }
14158
14159 /* Write the environment. It is a STRTAB that may be extracted with:
14160 readelf -pgnu.c++.ENV $(module).gcm */
14161
14162 void
14163 module_state::write_env (elf_out *to)
14164 {
14165 vec<const char *> vars;
14166 vars.create (20);
14167
14168 extern char **environ;
14169 while (const char *var = environ[vars.length ()])
14170 vars.safe_push (var);
14171 vars.qsort (env_var_cmp);
14172
14173 bytes_out env (to);
14174 env.begin (false);
14175 while (vars.length ())
14176 env.printf ("%s", vars.pop ());
14177 env.end (to, to->name (MOD_SNAME_PFX ".ENV"), NULL);
14178
14179 vars.release ();
14180 }
14181
14182 /* Write the direct or indirect imports.
14183 u:N
14184 {
14185 u:index
14186 s:name
14187 u32:crc
14188 s:filename (direct)
14189 u:exported (direct)
14190 } imports[N]
14191 */
14192
14193 void
14194 module_state::write_imports (bytes_out &sec, bool direct)
14195 {
14196 unsigned count = 0;
14197
14198 for (unsigned ix = 1; ix < modules->length (); ix++)
14199 {
14200 module_state *imp = (*modules)[ix];
14201
14202 if (imp->remap && imp->is_direct () == direct)
14203 count++;
14204 }
14205
14206 gcc_assert (!direct || count);
14207
14208 sec.u (count);
14209 for (unsigned ix = 1; ix < modules->length (); ix++)
14210 {
14211 module_state *imp = (*modules)[ix];
14212
14213 if (imp->remap && imp->is_direct () == direct)
14214 {
14215 dump () && dump ("Writing %simport:%u->%u %M (crc=%x)",
14216 !direct ? "indirect "
14217 : imp->exported_p ? "exported " : "",
14218 ix, imp->remap, imp, imp->crc);
14219 sec.u (imp->remap);
14220 sec.str (imp->get_flatname ());
14221 sec.u32 (imp->crc);
14222 if (direct)
14223 {
14224 write_location (sec, imp->imported_from ());
14225 sec.str (imp->filename);
14226 int exportedness = 0;
14227 if (imp->exported_p)
14228 exportedness = +1;
14229 else if (!imp->is_purview_direct ())
14230 exportedness = -1;
14231 sec.i (exportedness);
14232 }
14233 }
14234 }
14235 }
14236
14237 /* READER, LMAPS != NULL == direct imports,
14238 == NUL == indirect imports. */
14239
14240 unsigned
14241 module_state::read_imports (bytes_in &sec, cpp_reader *reader, line_maps *lmaps)
14242 {
14243 unsigned count = sec.u ();
14244 unsigned loaded = 0;
14245
14246 while (count--)
14247 {
14248 unsigned ix = sec.u ();
14249 if (ix >= slurp->remap->length () || !ix || (*slurp->remap)[ix])
14250 {
14251 sec.set_overrun ();
14252 break;
14253 }
14254
14255 const char *name = sec.str (NULL);
14256 module_state *imp = get_module (name);
14257 unsigned crc = sec.u32 ();
14258 int exportedness = 0;
14259
14260 /* If the import is a partition, it must be the same primary
14261 module as this TU. */
14262 if (imp && imp->is_partition () &&
14263 (!named_module_p ()
14264 || (get_primary ((*modules)[0]) != get_primary (imp))))
14265 imp = NULL;
14266
14267 if (!imp)
14268 sec.set_overrun ();
14269 if (sec.get_overrun ())
14270 break;
14271
14272 if (lmaps)
14273 {
14274 /* A direct import, maybe load it. */
14275 location_t floc = read_location (sec);
14276 const char *fname = sec.str (NULL);
14277 exportedness = sec.i ();
14278
14279 if (sec.get_overrun ())
14280 break;
14281
14282 if (!imp->check_not_purview (loc))
14283 continue;
14284
14285 if (imp->loadedness == ML_NONE)
14286 {
14287 imp->loc = floc;
14288 imp->crc = crc;
14289 if (!imp->get_flatname ())
14290 imp->set_flatname ();
14291
14292 unsigned n = dump.push (imp);
14293
14294 if (!imp->filename && fname)
14295 imp->filename = xstrdup (fname);
14296
14297 if (imp->is_partition ())
14298 dump () && dump ("Importing elided partition %M", imp);
14299
14300 if (!imp->do_import (reader, false))
14301 imp = NULL;
14302 dump.pop (n);
14303 if (!imp)
14304 continue;
14305 }
14306
14307 if (is_partition ())
14308 {
14309 if (!imp->is_direct ())
14310 imp->directness = MD_PARTITION_DIRECT;
14311 if (exportedness > 0)
14312 imp->exported_p = true;
14313 }
14314 }
14315 else
14316 {
14317 /* An indirect import, find it, it should already be here. */
14318 if (imp->loadedness == ML_NONE)
14319 {
14320 error_at (loc, "indirect import %qs is not already loaded", name);
14321 continue;
14322 }
14323 }
14324
14325 if (imp->crc != crc)
14326 error_at (loc, "import %qs has CRC mismatch", imp->get_flatname ());
14327
14328 (*slurp->remap)[ix] = (imp->mod << 1) | (lmaps != NULL);
14329
14330 if (lmaps && exportedness >= 0)
14331 set_import (imp, bool (exportedness));
14332 dump () && dump ("Found %simport:%u %M->%u", !lmaps ? "indirect "
14333 : exportedness > 0 ? "exported "
14334 : exportedness < 0 ? "gmf" : "", ix, imp,
14335 imp->mod);
14336 loaded++;
14337 }
14338
14339 return loaded;
14340 }
14341
14342 /* Write the import table to MOD_SNAME_PFX.imp. */
14343
14344 void
14345 module_state::write_imports (elf_out *to, unsigned *crc_ptr)
14346 {
14347 dump () && dump ("Writing imports");
14348 dump.indent ();
14349
14350 bytes_out sec (to);
14351 sec.begin ();
14352
14353 write_imports (sec, true);
14354 write_imports (sec, false);
14355
14356 sec.end (to, to->name (MOD_SNAME_PFX ".imp"), crc_ptr);
14357 dump.outdent ();
14358 }
14359
14360 bool
14361 module_state::read_imports (cpp_reader *reader, line_maps *lmaps)
14362 {
14363 bytes_in sec;
14364
14365 if (!sec.begin (loc, from (), MOD_SNAME_PFX ".imp"))
14366 return false;
14367
14368 dump () && dump ("Reading %u imports", slurp->remap->length () - 1);
14369 dump.indent ();
14370
14371 /* Read the imports. */
14372 unsigned direct = read_imports (sec, reader, lmaps);
14373 unsigned indirect = read_imports (sec, NULL, NULL);
14374 if (direct + indirect + 1 != slurp->remap->length ())
14375 from ()->set_error (elf::E_BAD_IMPORT);
14376
14377 dump.outdent ();
14378 if (!sec.end (from ()))
14379 return false;
14380 return true;
14381 }
14382
14383 /* We're the primary module interface, but have partitions. Document
14384 them so that non-partition module implementation units know which
14385 have already been loaded. */
14386
14387 void
14388 module_state::write_partitions (elf_out *to, unsigned count, unsigned *crc_ptr)
14389 {
14390 dump () && dump ("Writing %u elided partitions", count);
14391 dump.indent ();
14392
14393 bytes_out sec (to);
14394 sec.begin ();
14395
14396 for (unsigned ix = 1; ix != modules->length (); ix++)
14397 {
14398 module_state *imp = (*modules)[ix];
14399 if (imp->is_partition ())
14400 {
14401 dump () && dump ("Writing elided partition %M (crc=%x)",
14402 imp, imp->crc);
14403 sec.str (imp->get_flatname ());
14404 sec.u32 (imp->crc);
14405 write_location (sec, imp->is_direct ()
14406 ? imp->imported_from () : UNKNOWN_LOCATION);
14407 sec.str (imp->filename);
14408 }
14409 }
14410
14411 sec.end (to, to->name (MOD_SNAME_PFX ".prt"), crc_ptr);
14412 dump.outdent ();
14413 }
14414
14415 bool
14416 module_state::read_partitions (unsigned count)
14417 {
14418 bytes_in sec;
14419 if (!sec.begin (loc, from (), MOD_SNAME_PFX ".prt"))
14420 return false;
14421
14422 dump () && dump ("Reading %u elided partitions", count);
14423 dump.indent ();
14424
14425 while (count--)
14426 {
14427 const char *name = sec.str (NULL);
14428 unsigned crc = sec.u32 ();
14429 location_t floc = read_location (sec);
14430 const char *fname = sec.str (NULL);
14431
14432 if (sec.get_overrun ())
14433 break;
14434
14435 dump () && dump ("Reading elided partition %s (crc=%x)", name, crc);
14436
14437 module_state *imp = get_module (name);
14438 if (!imp || !imp->is_partition () || imp->is_rooted ()
14439 || get_primary (imp) != this)
14440 {
14441 sec.set_overrun ();
14442 break;
14443 }
14444
14445 /* Attach the partition without loading it. We'll have to load
14446 for real if it's indirectly imported. */
14447 imp->loc = floc;
14448 imp->crc = crc;
14449 if (!imp->filename && fname[0])
14450 imp->filename = xstrdup (fname);
14451 }
14452
14453 dump.outdent ();
14454 if (!sec.end (from ()))
14455 return false;
14456 return true;
14457 }
14458
14459 /* Counter indices. */
14460 enum module_state_counts
14461 {
14462 MSC_sec_lwm,
14463 MSC_sec_hwm,
14464 MSC_pendings,
14465 MSC_entities,
14466 MSC_namespaces,
14467 MSC_bindings,
14468 MSC_macros,
14469 MSC_inits,
14470 MSC_HWM
14471 };
14472
14473 /* Data for config reading and writing. */
14474 struct module_state_config {
14475 const char *dialect_str;
14476 unsigned num_imports;
14477 unsigned num_partitions;
14478 unsigned ordinary_locs;
14479 unsigned macro_locs;
14480 unsigned ordinary_loc_align;
14481
14482 public:
14483 module_state_config ()
14484 :dialect_str (get_dialect ()),
14485 num_imports (0), num_partitions (0),
14486 ordinary_locs (0), macro_locs (0), ordinary_loc_align (0)
14487 {
14488 }
14489
14490 static void release ()
14491 {
14492 XDELETEVEC (dialect);
14493 dialect = NULL;
14494 }
14495
14496 private:
14497 static const char *get_dialect ();
14498 static char *dialect;
14499 };
14500
14501 char *module_state_config::dialect;
14502
14503 /* Generate a string of the significant compilation options.
14504 Generally assume the user knows what they're doing, in the same way
14505 that object files can be mixed. */
14506
14507 const char *
14508 module_state_config::get_dialect ()
14509 {
14510 if (!dialect)
14511 dialect = concat (get_cxx_dialect_name (cxx_dialect),
14512 /* C++ implies these, only show if disabled. */
14513 flag_exceptions ? "" : "/no-exceptions",
14514 flag_rtti ? "" : "/no-rtti",
14515 flag_new_inheriting_ctors ? "" : "/old-inheriting-ctors",
14516 /* C++ 20 implies concepts. */
14517 cxx_dialect < cxx20 && flag_concepts ? "/concepts" : "",
14518 flag_coroutines ? "/coroutines" : "",
14519 flag_module_implicit_inline ? "/implicit-inline" : "",
14520 NULL);
14521
14522 return dialect;
14523 }
14524
14525 /* Contents of a cluster. */
14526 enum cluster_tag {
14527 ct_decl, /* A decl. */
14528 ct_defn, /* A definition. */
14529 ct_bind, /* A binding. */
14530 ct_hwm
14531 };
14532
14533 /* Binding modifiers. */
14534 enum ct_bind_flags
14535 {
14536 cbf_export = 0x1, /* An exported decl. */
14537 cbf_hidden = 0x2, /* A hidden (friend) decl. */
14538 cbf_using = 0x4, /* A using decl. */
14539 cbf_wrapped = 0x8, /* ... that is wrapped. */
14540 };
14541
14542 /* Write the cluster of depsets in SCC[0-SIZE). */
14543
14544 unsigned
14545 module_state::write_cluster (elf_out *to, depset *scc[], unsigned size,
14546 depset::hash &table, unsigned *counts,
14547 unsigned *crc_ptr)
14548 {
14549 dump () && dump ("Writing section:%u %u depsets", table.section, size);
14550 dump.indent ();
14551
14552 trees_out sec (to, this, table, table.section);
14553 sec.begin ();
14554
14555 /* Determine entity numbers, mark for writing. */
14556 dump (dumper::CLUSTER) && dump ("Cluster members:") && (dump.indent (), true);
14557 for (unsigned ix = 0; ix != size; ix++)
14558 {
14559 depset *b = scc[ix];
14560
14561 switch (b->get_entity_kind ())
14562 {
14563 default:
14564 gcc_unreachable ();
14565
14566 case depset::EK_BINDING:
14567 dump (dumper::CLUSTER)
14568 && dump ("[%u]=%s %P", ix, b->entity_kind_name (),
14569 b->get_entity (), b->get_name ());
14570 for (unsigned jx = b->deps.length (); jx--;)
14571 {
14572 depset *dep = b->deps[jx];
14573 if (jx)
14574 gcc_checking_assert (dep->get_entity_kind () == depset::EK_USING
14575 || TREE_VISITED (dep->get_entity ()));
14576 else
14577 gcc_checking_assert (dep->get_entity_kind ()
14578 == depset::EK_NAMESPACE
14579 && dep->get_entity () == b->get_entity ());
14580 }
14581 break;
14582
14583 case depset::EK_DECL:
14584 if (b->is_member ())
14585 {
14586 case depset::EK_SPECIALIZATION: /* Yowzer! */
14587 case depset::EK_PARTIAL: /* Hey, let's do it again! */
14588 counts[MSC_pendings]++;
14589 }
14590 b->cluster = counts[MSC_entities]++;
14591 sec.mark_declaration (b->get_entity (), b->has_defn ());
14592 /* FALLTHROUGH */
14593
14594 case depset::EK_USING:
14595 gcc_checking_assert (!b->is_import ()
14596 && !b->is_unreached ());
14597 dump (dumper::CLUSTER)
14598 && dump ("[%u]=%s %s %N", ix, b->entity_kind_name (),
14599 b->has_defn () ? "definition" : "declaration",
14600 b->get_entity ());
14601 break;
14602 }
14603 }
14604 dump (dumper::CLUSTER) && (dump.outdent (), true);
14605
14606 /* Ensure every imported decl is referenced before we start
14607 streaming. This ensures that we never encounter the
14608 situation where this cluster instantiates some implicit
14609 member that importing some other decl causes to be
14610 instantiated. */
14611 sec.set_importing (+1);
14612 for (unsigned ix = 0; ix != size; ix++)
14613 {
14614 depset *b = scc[ix];
14615 for (unsigned jx = (b->get_entity_kind () == depset::EK_BINDING
14616 || b->is_special ()) ? 1 : 0;
14617 jx != b->deps.length (); jx++)
14618 {
14619 depset *dep = b->deps[jx];
14620
14621 if (!dep->is_binding ()
14622 && dep->is_import () && !TREE_VISITED (dep->get_entity ()))
14623 {
14624 tree import = dep->get_entity ();
14625
14626 sec.tree_node (import);
14627 dump (dumper::CLUSTER) && dump ("Seeded import %N", import);
14628 }
14629 }
14630 }
14631 sec.tree_node (NULL_TREE);
14632 /* We're done importing now. */
14633 sec.set_importing (-1);
14634
14635 /* Write non-definitions. */
14636 for (unsigned ix = 0; ix != size; ix++)
14637 {
14638 depset *b = scc[ix];
14639 tree decl = b->get_entity ();
14640 switch (b->get_entity_kind ())
14641 {
14642 default:
14643 gcc_unreachable ();
14644 break;
14645
14646 case depset::EK_BINDING:
14647 {
14648 gcc_assert (TREE_CODE (decl) == NAMESPACE_DECL);
14649 dump () && dump ("Depset:%u binding %C:%P", ix, TREE_CODE (decl),
14650 decl, b->get_name ());
14651 sec.u (ct_bind);
14652 sec.tree_node (decl);
14653 sec.tree_node (b->get_name ());
14654
14655 /* Write in reverse order, so reading will see the exports
14656 first, thus building the overload chain will be
14657 optimized. */
14658 for (unsigned jx = b->deps.length (); --jx;)
14659 {
14660 depset *dep = b->deps[jx];
14661 tree bound = dep->get_entity ();
14662 unsigned flags = 0;
14663 if (dep->get_entity_kind () == depset::EK_USING)
14664 {
14665 tree ovl = bound;
14666 bound = OVL_FUNCTION (bound);
14667 if (!(TREE_CODE (bound) == CONST_DECL
14668 && UNSCOPED_ENUM_P (TREE_TYPE (bound))
14669 && decl == TYPE_NAME (TREE_TYPE (bound))))
14670 {
14671 /* An unscope enumerator in its enumeration's
14672 scope is not a using. */
14673 flags |= cbf_using;
14674 if (OVL_USING_P (ovl))
14675 flags |= cbf_wrapped;
14676 }
14677 if (OVL_EXPORT_P (ovl))
14678 flags |= cbf_export;
14679 }
14680 else
14681 {
14682 /* An implicit typedef must be at one. */
14683 gcc_assert (!DECL_IMPLICIT_TYPEDEF_P (bound) || jx == 1);
14684 if (dep->is_hidden ())
14685 flags |= cbf_hidden;
14686 else if (DECL_MODULE_EXPORT_P (STRIP_TEMPLATE (bound)))
14687 flags |= cbf_export;
14688 }
14689
14690 gcc_checking_assert (DECL_P (bound));
14691
14692 sec.i (flags);
14693 sec.tree_node (bound);
14694 }
14695
14696 /* Terminate the list. */
14697 sec.i (-1);
14698 }
14699 break;
14700
14701 case depset::EK_USING:
14702 dump () && dump ("Depset:%u %s %C:%N", ix, b->entity_kind_name (),
14703 TREE_CODE (decl), decl);
14704 break;
14705
14706 case depset::EK_SPECIALIZATION:
14707 case depset::EK_PARTIAL:
14708 case depset::EK_DECL:
14709 dump () && dump ("Depset:%u %s entity:%u %C:%N", ix,
14710 b->entity_kind_name (), b->cluster,
14711 TREE_CODE (decl), decl);
14712
14713 sec.u (ct_decl);
14714 sec.tree_node (decl);
14715
14716 dump () && dump ("Wrote declaration entity:%u %C:%N",
14717 b->cluster, TREE_CODE (decl), decl);
14718 break;
14719 }
14720 }
14721
14722 depset *namer = NULL;
14723
14724 /* Write out definitions */
14725 for (unsigned ix = 0; ix != size; ix++)
14726 {
14727 depset *b = scc[ix];
14728 tree decl = b->get_entity ();
14729 switch (b->get_entity_kind ())
14730 {
14731 default:
14732 break;
14733
14734 case depset::EK_SPECIALIZATION:
14735 case depset::EK_PARTIAL:
14736 case depset::EK_DECL:
14737 if (!namer)
14738 namer = b;
14739
14740 if (b->has_defn ())
14741 {
14742 sec.u (ct_defn);
14743 sec.tree_node (decl);
14744 dump () && dump ("Writing definition %N", decl);
14745 sec.write_definition (decl);
14746
14747 if (!namer->has_defn ())
14748 namer = b;
14749 }
14750 break;
14751 }
14752 }
14753
14754 /* We don't find the section by name. Use depset's decl's name for
14755 human friendliness. */
14756 unsigned name = 0;
14757 tree naming_decl = NULL_TREE;
14758 if (namer)
14759 {
14760 naming_decl = namer->get_entity ();
14761 if (namer->get_entity_kind () == depset::EK_USING)
14762 /* This unfortunately names the section from the target of the
14763 using decl. But the name is only a guide, so Do Not Care. */
14764 naming_decl = OVL_FUNCTION (naming_decl);
14765 if (DECL_IMPLICIT_TYPEDEF_P (naming_decl))
14766 /* Lose any anonymousness. */
14767 naming_decl = TYPE_NAME (TREE_TYPE (naming_decl));
14768 name = to->qualified_name (naming_decl, namer->has_defn ());
14769 }
14770
14771 unsigned bytes = sec.pos;
14772 unsigned snum = sec.end (to, name, crc_ptr);
14773
14774 for (unsigned ix = size; ix--;)
14775 gcc_checking_assert (scc[ix]->section == snum);
14776
14777 dump.outdent ();
14778 dump () && dump ("Wrote section:%u named-by:%N", table.section, naming_decl);
14779
14780 return bytes;
14781 }
14782
14783 /* Read a cluster from section SNUM. */
14784
14785 bool
14786 module_state::read_cluster (unsigned snum)
14787 {
14788 trees_in sec (this);
14789
14790 if (!sec.begin (loc, from (), snum))
14791 return false;
14792
14793 dump () && dump ("Reading section:%u", snum);
14794 dump.indent ();
14795
14796 /* We care about structural equality. */
14797 comparing_specializations++;
14798
14799 /* First seed the imports. */
14800 while (tree import = sec.tree_node ())
14801 dump (dumper::CLUSTER) && dump ("Seeded import %N", import);
14802
14803 while (!sec.get_overrun () && sec.more_p ())
14804 {
14805 unsigned ct = sec.u ();
14806 switch (ct)
14807 {
14808 default:
14809 sec.set_overrun ();
14810 break;
14811
14812 case ct_bind:
14813 /* A set of namespace bindings. */
14814 {
14815 tree ns = sec.tree_node ();
14816 tree name = sec.tree_node ();
14817 tree decls = NULL_TREE;
14818 tree visible = NULL_TREE;
14819 tree type = NULL_TREE;
14820 bool dedup = false;
14821
14822 /* We rely on the bindings being in the reverse order of
14823 the resulting overload set. */
14824 for (;;)
14825 {
14826 int flags = sec.i ();
14827 if (flags < 0)
14828 break;
14829
14830 if ((flags & cbf_hidden)
14831 && (flags & (cbf_using | cbf_export)))
14832 sec.set_overrun ();
14833
14834 tree decl = sec.tree_node ();
14835 if (sec.get_overrun ())
14836 break;
14837
14838 if (decls && TREE_CODE (decl) == TYPE_DECL)
14839 {
14840 /* Stat hack. */
14841 if (type || !DECL_IMPLICIT_TYPEDEF_P (decl))
14842 sec.set_overrun ();
14843 type = decl;
14844 }
14845 else
14846 {
14847 if (decls
14848 || (flags & (cbf_hidden | cbf_wrapped))
14849 || DECL_FUNCTION_TEMPLATE_P (decl))
14850 {
14851 decls = ovl_make (decl, decls);
14852 if (flags & cbf_using)
14853 {
14854 dedup = true;
14855 OVL_USING_P (decls) = true;
14856 if (flags & cbf_export)
14857 OVL_EXPORT_P (decls) = true;
14858 }
14859
14860 if (flags & cbf_hidden)
14861 OVL_HIDDEN_P (decls) = true;
14862 else if (dedup)
14863 OVL_DEDUP_P (decls) = true;
14864 }
14865 else
14866 decls = decl;
14867
14868 if (flags & cbf_export
14869 || (!(flags & cbf_hidden)
14870 && (is_module () || is_partition ())))
14871 visible = decls;
14872 }
14873 }
14874
14875 if (!decls)
14876 sec.set_overrun ();
14877
14878 if (sec.get_overrun ())
14879 break; /* Bail. */
14880
14881 dump () && dump ("Binding of %P", ns, name);
14882 if (!set_module_binding (ns, name, mod,
14883 is_header () ? -1
14884 : is_module () || is_partition () ? 1
14885 : 0,
14886 decls, type, visible))
14887 sec.set_overrun ();
14888
14889 if (type
14890 && CP_DECL_CONTEXT (type) == ns
14891 && !sec.is_duplicate (type))
14892 add_module_decl (ns, name, type);
14893
14894 for (ovl_iterator iter (decls); iter; ++iter)
14895 if (!iter.using_p ())
14896 {
14897 tree decl = *iter;
14898 if (CP_DECL_CONTEXT (decl) == ns
14899 && !sec.is_duplicate (decl))
14900 add_module_decl (ns, name, decl);
14901 }
14902 }
14903 break;
14904
14905 case ct_decl:
14906 /* A decl. */
14907 {
14908 tree decl = sec.tree_node ();
14909 dump () && dump ("Read declaration of %N", decl);
14910 }
14911 break;
14912
14913 case ct_defn:
14914 {
14915 tree decl = sec.tree_node ();
14916 dump () && dump ("Reading definition of %N", decl);
14917 sec.read_definition (decl);
14918 }
14919 break;
14920 }
14921 }
14922
14923 /* When lazy loading is in effect, we can be in the middle of
14924 parsing or instantiating a function. Save it away.
14925 push_function_context does too much work. */
14926 tree old_cfd = current_function_decl;
14927 struct function *old_cfun = cfun;
14928 while (tree decl = sec.post_process ())
14929 {
14930 bool abstract = false;
14931 if (TREE_CODE (decl) == TEMPLATE_DECL)
14932 {
14933 abstract = true;
14934 decl = DECL_TEMPLATE_RESULT (decl);
14935 }
14936
14937 current_function_decl = decl;
14938 allocate_struct_function (decl, abstract);
14939 cfun->language = ggc_cleared_alloc<language_function> ();
14940 cfun->language->base.x_stmt_tree.stmts_are_full_exprs_p = 1;
14941
14942 if (abstract)
14943 ;
14944 else if (DECL_ABSTRACT_P (decl))
14945 {
14946 bool cloned = maybe_clone_body (decl);
14947 if (!cloned)
14948 from ()->set_error ();
14949 }
14950 else
14951 {
14952 bool aggr = aggregate_value_p (DECL_RESULT (decl), decl);
14953 #ifdef PCC_STATIC_STRUCT_RETURN
14954 cfun->returns_pcc_struct = aggr;
14955 #endif
14956 cfun->returns_struct = aggr;
14957
14958 if (DECL_COMDAT (decl))
14959 // FIXME: Comdat grouping?
14960 comdat_linkage (decl);
14961 note_vague_linkage_fn (decl);
14962 cgraph_node::finalize_function (decl, true);
14963 }
14964
14965 }
14966 /* Look, function.c's interface to cfun does too much for us, we
14967 just need to restore the old value. I do not want to go
14968 redesigning that API right now. */
14969 #undef cfun
14970 cfun = old_cfun;
14971 current_function_decl = old_cfd;
14972 comparing_specializations--;
14973
14974 dump.outdent ();
14975 dump () && dump ("Read section:%u", snum);
14976
14977 loaded_clusters++;
14978
14979 if (!sec.end (from ()))
14980 return false;
14981
14982 return true;
14983 }
14984
14985 void
14986 module_state::write_namespace (bytes_out &sec, depset *dep)
14987 {
14988 unsigned ns_num = dep->cluster;
14989 unsigned ns_import = 0;
14990
14991 if (dep->is_import ())
14992 ns_import = dep->section;
14993 else if (dep->get_entity () != global_namespace)
14994 ns_num++;
14995
14996 sec.u (ns_import);
14997 sec.u (ns_num);
14998 }
14999
15000 tree
15001 module_state::read_namespace (bytes_in &sec)
15002 {
15003 unsigned ns_import = sec.u ();
15004 unsigned ns_num = sec.u ();
15005 tree ns = NULL_TREE;
15006
15007 if (ns_import || ns_num)
15008 {
15009 if (!ns_import)
15010 ns_num--;
15011
15012 if (unsigned origin = slurp->remap_module (ns_import))
15013 {
15014 module_state *from = (*modules)[origin];
15015 if (ns_num < from->entity_num)
15016 {
15017 binding_slot &slot = (*entity_ary)[from->entity_lwm + ns_num];
15018
15019 if (!slot.is_lazy ())
15020 ns = slot;
15021 }
15022 }
15023 else
15024 sec.set_overrun ();
15025 }
15026 else
15027 ns = global_namespace;
15028
15029 return ns;
15030 }
15031
15032 /* SPACES is a sorted vector of namespaces. Write out the namespaces
15033 to MOD_SNAME_PFX.nms section. */
15034
15035 void
15036 module_state::write_namespaces (elf_out *to, vec<depset *> spaces,
15037 unsigned num, unsigned *crc_p)
15038 {
15039 dump () && dump ("Writing namespaces");
15040 dump.indent ();
15041
15042 bytes_out sec (to);
15043 sec.begin ();
15044
15045 for (unsigned ix = 0; ix != num; ix++)
15046 {
15047 depset *b = spaces[ix];
15048 tree ns = b->get_entity ();
15049
15050 gcc_checking_assert (TREE_CODE (ns) == NAMESPACE_DECL);
15051
15052 bool export_p = DECL_MODULE_EXPORT_P (ns);
15053 bool inline_p = DECL_NAMESPACE_INLINE_P (ns);
15054 bool public_p = TREE_PUBLIC (ns);
15055
15056 /* We should only be naming public namespaces, or our own
15057 private ones. Internal linkage ones never get to be written
15058 out -- because that means something erroneously referred to a
15059 member. However, Davis Herring's paper probably changes that
15060 by permitting them to be written out, but then an error if on
15061 touches them. (Certain cases cannot be detected until that
15062 point.) */
15063 gcc_checking_assert (public_p || !DECL_MODULE_IMPORT_P (ns));
15064 unsigned flags = 0;
15065 if (export_p)
15066 flags |= 1;
15067 if (inline_p)
15068 flags |= 2;
15069 if (public_p)
15070 flags |= 4;
15071 dump () && dump ("Writing namespace:%u %N%s%s%s",
15072 b->cluster, ns, export_p ? ", export" : "",
15073 public_p ? ", public" : "",
15074 inline_p ? ", inline" : "");
15075 sec.u (b->cluster);
15076 sec.u (to->name (DECL_NAME (ns)));
15077 write_namespace (sec, b->deps[0]);
15078
15079 /* Don't use bools, because this can be near the end of the
15080 section, and it won't save anything anyway. */
15081 sec.u (flags);
15082 write_location (sec, DECL_SOURCE_LOCATION (ns));
15083 }
15084
15085 sec.end (to, to->name (MOD_SNAME_PFX ".nms"), crc_p);
15086 dump.outdent ();
15087 }
15088
15089 /* Read the namespace hierarchy from MOD_SNAME_PFX.namespace. Fill in
15090 SPACES from that data. */
15091
15092 bool
15093 module_state::read_namespaces (unsigned num)
15094 {
15095 bytes_in sec;
15096
15097 if (!sec.begin (loc, from (), MOD_SNAME_PFX ".nms"))
15098 return false;
15099
15100 dump () && dump ("Reading namespaces");
15101 dump.indent ();
15102
15103 for (unsigned ix = 0; ix != num; ix++)
15104 {
15105 unsigned entity_index = sec.u ();
15106 unsigned name = sec.u ();
15107
15108 tree parent = read_namespace (sec);
15109
15110 /* See comment in write_namespace about why not bits. */
15111 unsigned flags = sec.u ();
15112 location_t src_loc = read_location (sec);
15113
15114 if (entity_index >= entity_num || !parent)
15115 sec.set_overrun ();
15116 if (sec.get_overrun ())
15117 break;
15118
15119 tree id = name ? get_identifier (from ()->name (name)) : NULL_TREE;
15120 bool public_p = flags & 4;
15121 bool inline_p = flags & 2;
15122 bool export_p = flags & 1;
15123
15124 dump () && dump ("Read namespace:%u %P%s%s%s",
15125 entity_index, parent, id, export_p ? ", export" : "",
15126 public_p ? ", public" : "",
15127 inline_p ? ", inline" : "");
15128 bool visible_p = (export_p
15129 || (public_p && (is_partition () || is_module ())));
15130 tree inner = add_imported_namespace (parent, id, mod,
15131 src_loc, visible_p, inline_p);
15132 if (export_p && is_partition ())
15133 DECL_MODULE_EXPORT_P (inner) = true;
15134
15135 /* Install the namespace. */
15136 (*entity_ary)[entity_lwm + entity_index] = inner;
15137 if (DECL_MODULE_IMPORT_P (inner))
15138 {
15139 bool existed;
15140 unsigned *slot = &entity_map->get_or_insert
15141 (DECL_UID (inner), &existed);
15142 if (existed)
15143 /* If it existed, it should match. */
15144 gcc_checking_assert (inner == (*entity_ary)[*slot]);
15145 else
15146 *slot = entity_lwm + entity_index;
15147 }
15148 }
15149 dump.outdent ();
15150 if (!sec.end (from ()))
15151 return false;
15152 return true;
15153 }
15154
15155 /* Write the binding TABLE to MOD_SNAME_PFX.bnd */
15156
15157 unsigned
15158 module_state::write_bindings (elf_out *to, vec<depset *> sccs, unsigned *crc_p)
15159 {
15160 dump () && dump ("Writing binding table");
15161 dump.indent ();
15162
15163 unsigned num = 0;
15164 bytes_out sec (to);
15165 sec.begin ();
15166
15167 for (unsigned ix = 0; ix != sccs.length (); ix++)
15168 {
15169 depset *b = sccs[ix];
15170 if (b->is_binding ())
15171 {
15172 tree ns = b->get_entity ();
15173 dump () && dump ("Bindings %P section:%u", ns, b->get_name (),
15174 b->section);
15175 sec.u (to->name (b->get_name ()));
15176 write_namespace (sec, b->deps[0]);
15177 sec.u (b->section);
15178 num++;
15179 }
15180 }
15181
15182 sec.end (to, to->name (MOD_SNAME_PFX ".bnd"), crc_p);
15183 dump.outdent ();
15184
15185 return num;
15186 }
15187
15188 /* Read the binding table from MOD_SNAME_PFX.bind. */
15189
15190 bool
15191 module_state::read_bindings (unsigned num, unsigned lwm, unsigned hwm)
15192 {
15193 bytes_in sec;
15194
15195 if (!sec.begin (loc, from (), MOD_SNAME_PFX ".bnd"))
15196 return false;
15197
15198 dump () && dump ("Reading binding table");
15199 dump.indent ();
15200 for (; !sec.get_overrun () && num--;)
15201 {
15202 const char *name = from ()->name (sec.u ());
15203 tree ns = read_namespace (sec);
15204 unsigned snum = sec.u ();
15205
15206 if (!ns || !name || (snum - lwm) >= (hwm - lwm))
15207 sec.set_overrun ();
15208 if (!sec.get_overrun ())
15209 {
15210 tree id = get_identifier (name);
15211 dump () && dump ("Bindings %P section:%u", ns, id, snum);
15212 if (mod && !import_module_binding (ns, id, mod, snum))
15213 break;
15214 }
15215 }
15216
15217 dump.outdent ();
15218 if (!sec.end (from ()))
15219 return false;
15220 return true;
15221 }
15222
15223 /* Write the entity table to MOD_SNAME_PFX.ent
15224
15225 Each entry is a section number. */
15226
15227 void
15228 module_state::write_entities (elf_out *to, vec<depset *> depsets,
15229 unsigned count, unsigned *crc_p)
15230 {
15231 dump () && dump ("Writing entities");
15232 dump.indent ();
15233
15234 bytes_out sec (to);
15235 sec.begin ();
15236
15237 unsigned current = 0;
15238 for (unsigned ix = 0; ix < depsets.length (); ix++)
15239 {
15240 depset *d = depsets[ix];
15241
15242 switch (d->get_entity_kind ())
15243 {
15244 default:
15245 break;
15246
15247 case depset::EK_NAMESPACE:
15248 if (!d->is_import () && d->get_entity () != global_namespace)
15249 {
15250 gcc_checking_assert (d->cluster == current);
15251 current++;
15252 sec.u (0);
15253 }
15254 break;
15255
15256 case depset::EK_DECL:
15257 case depset::EK_SPECIALIZATION:
15258 case depset::EK_PARTIAL:
15259 gcc_checking_assert (!d->is_unreached ()
15260 && !d->is_import ()
15261 && d->cluster == current
15262 && d->section);
15263 current++;
15264 sec.u (d->section);
15265 break;
15266 }
15267 }
15268 gcc_assert (count == current);
15269 sec.end (to, to->name (MOD_SNAME_PFX ".ent"), crc_p);
15270 dump.outdent ();
15271 }
15272
15273 bool
15274 module_state::read_entities (unsigned count, unsigned lwm, unsigned hwm)
15275 {
15276 trees_in sec (this);
15277
15278 if (!sec.begin (loc, from (), MOD_SNAME_PFX ".ent"))
15279 return false;
15280
15281 dump () && dump ("Reading entities");
15282 dump.indent ();
15283
15284 vec_safe_reserve (entity_ary, count);
15285 unsigned ix;
15286 for (ix = 0; ix != count; ix++)
15287 {
15288 unsigned snum = sec.u ();
15289 if (snum && (snum - lwm) >= (hwm - lwm))
15290 sec.set_overrun ();
15291 if (sec.get_overrun ())
15292 break;
15293
15294 binding_slot slot;
15295 slot.u.binding = NULL_TREE;
15296 if (snum)
15297 slot.set_lazy (snum << 2);
15298 entity_ary->quick_push (slot);
15299 }
15300 entity_num = ix;
15301
15302 dump.outdent ();
15303 if (!sec.end (from ()))
15304 return false;
15305 return true;
15306 }
15307
15308 /* Write the pending table to MOD_SNAME_PFX.pnd
15309
15310 Specializations & partials are keyed to their primary template.
15311 Members are keyed to their context.
15312
15313 For specializations & partials, primary templates are keyed to the
15314 (namespace name) of their originating decl (because that's the only
15315 handle we have). */
15316
15317 void
15318 module_state::write_pendings (elf_out *to, vec<depset *> depsets,
15319 depset::hash &table,
15320 unsigned count, unsigned *crc_p)
15321 {
15322 dump () && dump ("Writing %u pendings", count);
15323 dump.indent ();
15324
15325 trees_out sec (to, this, table);
15326 sec.begin ();
15327
15328 for (unsigned ix = 0; ix < depsets.length (); ix++)
15329 {
15330 depset *d = depsets[ix];
15331 depset::entity_kind kind = d->get_entity_kind ();
15332 tree key = NULL_TREE;
15333 bool is_spec = false;
15334
15335
15336 if (kind == depset::EK_SPECIALIZATION)
15337 {
15338 is_spec = true;
15339 key = reinterpret_cast <spec_entry *> (d->deps[0])->tmpl;
15340 }
15341 else if (kind == depset::EK_PARTIAL)
15342 {
15343 is_spec = true;
15344 key = CLASSTYPE_TI_TEMPLATE (TREE_TYPE (d->get_entity ()));
15345 }
15346 else if (kind == depset::EK_DECL && d->is_member ())
15347 {
15348 tree ctx = DECL_CONTEXT (d->get_entity ());
15349 key = TYPE_NAME (ctx);
15350 if (tree ti = CLASSTYPE_TEMPLATE_INFO (ctx))
15351 if (DECL_TEMPLATE_RESULT (TI_TEMPLATE (ti)) == key)
15352 key = TI_TEMPLATE (ti);
15353 }
15354
15355 // FIXME:OPTIMIZATION More than likely when there is one pending
15356 // member, there will be others. All written in the same
15357 // section and keyed to the same class. We only need to record
15358 // one of them. The same is not true for specializations
15359
15360 if (key)
15361 {
15362 gcc_checking_assert (!d->is_import ());
15363
15364 {
15365 /* Key the entity to its key. */
15366 depset *key_dep = table.find_dependency (key);
15367 if (key_dep->get_entity_kind () == depset::EK_REDIRECT)
15368 key_dep = key_dep->deps[0];
15369 unsigned key_origin
15370 = key_dep->is_import () ? key_dep->section : 0;
15371 sec.u (key_origin);
15372 sec.u (key_dep->cluster);
15373 sec.u (d->cluster);
15374 dump () && dump ("%s %N entity:%u keyed to %M[%u] %N",
15375 is_spec ? "Specialization" : "Member",
15376 d->get_entity (),
15377 d->cluster, (*modules)[key_origin],
15378 key_dep->cluster, key);
15379 }
15380
15381 if (is_spec)
15382 {
15383 /* Key the general template to the originating decl. */
15384 tree origin = get_originating_module_decl (key);
15385 sec.tree_node (CP_DECL_CONTEXT (origin));
15386 sec.tree_node (DECL_NAME (origin));
15387
15388 unsigned origin_ident = import_entity_index (origin);
15389 module_state *origin_from = this;
15390 if (!(origin_ident & ~(~0u>>1)))
15391 origin_from = import_entity_module (origin_ident);
15392 sec.u (origin_from->remap);
15393 }
15394 else
15395 sec.tree_node (NULL);
15396 count--;
15397 }
15398 }
15399 gcc_assert (!count);
15400 sec.end (to, to->name (MOD_SNAME_PFX ".pnd"), crc_p);
15401 dump.outdent ();
15402 }
15403
15404 bool
15405 module_state::read_pendings (unsigned count)
15406 {
15407 trees_in sec (this);
15408
15409 if (!sec.begin (loc, from (), MOD_SNAME_PFX ".pnd"))
15410 return false;
15411
15412 dump () && dump ("Reading %u pendings", count);
15413 dump.indent ();
15414
15415 for (unsigned ix = 0; ix != count; ix++)
15416 {
15417 unsigned key_origin = slurp->remap_module (sec.u ());
15418 unsigned key_index = sec.u ();
15419 unsigned ent_index = sec.u ();
15420 module_state *from = (*modules)[key_origin];
15421 tree ns = sec.tree_node ();
15422
15423 if (!key_origin
15424 || key_index >= from->entity_num || ent_index >= entity_num
15425 || (ns && TREE_CODE (ns) != NAMESPACE_DECL))
15426 sec.set_overrun ();
15427
15428 if (sec.get_overrun ())
15429 break;
15430
15431 bool loaded = false;
15432 dump () && dump ("%s keyed to %M[%u] entity:%u",
15433 ns ? "Specialization" : "Member",
15434 from, key_index, ent_index);
15435 unsigned key_ident = from->entity_lwm + key_index;
15436 if (pending_table->add (ns ? key_ident : ~key_ident,
15437 ent_index + entity_lwm))
15438 {
15439 binding_slot &slot = (*entity_ary)[key_ident];
15440 if (slot.is_lazy ())
15441 slot.or_lazy (ns ? 1 : 2);
15442 else
15443 {
15444 tree key = slot;
15445
15446 loaded = true;
15447 if (ns)
15448 {
15449 if (key && TREE_CODE (key) == TEMPLATE_DECL)
15450 DECL_MODULE_PENDING_SPECIALIZATIONS_P (key) = true;
15451 else
15452 sec.set_overrun ();
15453 }
15454 else
15455 {
15456 if (key && TREE_CODE (key) == TYPE_DECL)
15457 DECL_MODULE_PENDING_MEMBERS_P (key) = true;
15458 else
15459 sec.set_overrun ();
15460 }
15461 }
15462 }
15463
15464 if (ns)
15465 {
15466 /* We also need to mark the namespace binding of the
15467 originating template, so we know to set its pending
15468 specializations flag, when we load it. */
15469 tree name = sec.tree_node ();
15470 unsigned origin = slurp->remap_module (sec.u ());
15471 if (!origin || !name || TREE_CODE (name) != IDENTIFIER_NODE)
15472 sec.set_overrun ();
15473 if (sec.get_overrun ())
15474 break;
15475
15476 module_state *origin_from = (*modules)[origin];
15477 if (!loaded
15478 && (origin_from->is_header ()
15479 || (origin_from->is_partition ()
15480 || origin_from->is_module ())))
15481 note_pending_specializations (ns, name, origin_from->is_header ());
15482 }
15483 }
15484
15485 dump.outdent ();
15486 if (!sec.end (from ()))
15487 return false;
15488 return true;
15489 }
15490
15491 /* Return true if module MOD cares about lazy specializations keyed to
15492 possibly duplicated entity bindings. */
15493
15494 bool
15495 lazy_specializations_p (unsigned mod, bool header_p, bool partition_p)
15496 {
15497 module_state *module = (*modules)[mod];
15498
15499 if (module->is_header ())
15500 return header_p;
15501
15502 if (module->is_module () || module->is_partition ())
15503 return partition_p;
15504
15505 return false;
15506 }
15507
15508 /* Read & write locations. */
15509 enum loc_kind {
15510 LK_ORDINARY,
15511 LK_MACRO,
15512 LK_IMPORT_ORDINARY,
15513 LK_IMPORT_MACRO,
15514 LK_ADHOC,
15515 LK_RESERVED,
15516 };
15517
15518 static const module_state *
15519 module_for_ordinary_loc (location_t loc)
15520 {
15521 unsigned pos = 1;
15522 unsigned len = modules->length () - pos;
15523
15524 while (len)
15525 {
15526 unsigned half = len / 2;
15527 module_state *probe = (*modules)[pos + half];
15528 if (loc < probe->ordinary_locs.first)
15529 len = half;
15530 else if (loc < probe->ordinary_locs.second)
15531 return probe;
15532 else
15533 {
15534 pos += half + 1;
15535 len = len - (half + 1);
15536 }
15537 }
15538
15539 return NULL;
15540 }
15541
15542 static const module_state *
15543 module_for_macro_loc (location_t loc)
15544 {
15545 unsigned pos = 1;
15546 unsigned len = modules->length () - pos;
15547
15548 while (len)
15549 {
15550 unsigned half = len / 2;
15551 module_state *probe = (*modules)[pos + half];
15552 if (loc >= probe->macro_locs.second)
15553 len = half;
15554 else if (loc >= probe->macro_locs.first)
15555 return probe;
15556 else
15557 {
15558 pos += half + 1;
15559 len = len - (half + 1);
15560 }
15561 }
15562
15563 return NULL;
15564 }
15565
15566 location_t
15567 module_state::imported_from () const
15568 {
15569 location_t from = loc;
15570 line_map_ordinary const *fmap
15571 = linemap_check_ordinary (linemap_lookup (line_table, from));
15572
15573 if (MAP_MODULE_P (fmap))
15574 from = linemap_included_from (fmap);
15575
15576 return from;
15577 }
15578
15579 /* If we're not streaming, record that we need location LOC.
15580 Otherwise stream it. */
15581
15582 void
15583 module_state::write_location (bytes_out &sec, location_t loc)
15584 {
15585 if (!sec.streaming_p ())
15586 /* This is where we should note we use this location. See comment
15587 about write_ordinary_maps. */
15588 return;
15589
15590 if (loc < RESERVED_LOCATION_COUNT)
15591 {
15592 dump (dumper::LOCATION) && dump ("Reserved location %u", unsigned (loc));
15593 sec.u (LK_RESERVED + loc);
15594 }
15595 else if (IS_ADHOC_LOC (loc))
15596 {
15597 dump (dumper::LOCATION) && dump ("Adhoc location");
15598 sec.u (LK_ADHOC);
15599 location_t locus = get_location_from_adhoc_loc (line_table, loc);
15600 write_location (sec, locus);
15601 source_range range = get_range_from_loc (line_table, loc);
15602 if (range.m_start == locus)
15603 /* Compress. */
15604 range.m_start = UNKNOWN_LOCATION;
15605 write_location (sec, range.m_start);
15606 write_location (sec, range.m_finish);
15607 }
15608 else if (IS_MACRO_LOC (loc))
15609 {
15610 if (const loc_spans::span *span = spans.macro (loc))
15611 {
15612 unsigned off = MAX_LOCATION_T - loc;
15613
15614 off -= span->macro_delta;
15615
15616 sec.u (LK_MACRO);
15617 sec.u (off);
15618 dump (dumper::LOCATION)
15619 && dump ("Macro location %u output %u", loc, off);
15620 }
15621 else if (const module_state *import = module_for_macro_loc (loc))
15622 {
15623 unsigned off = import->macro_locs.second - loc - 1;
15624 sec.u (LK_IMPORT_MACRO);
15625 sec.u (import->remap);
15626 sec.u (off);
15627 dump (dumper::LOCATION)
15628 && dump ("Imported macro location %u output %u:%u",
15629 loc, import->remap, off);
15630 }
15631 else
15632 gcc_unreachable ();
15633 }
15634 else if (IS_ORDINARY_LOC (loc))
15635 {
15636 if (const loc_spans::span *span = spans.ordinary (loc))
15637 {
15638 unsigned off = loc;
15639
15640 off += span->ordinary_delta;
15641 sec.u (LK_ORDINARY);
15642 sec.u (off);
15643
15644 dump (dumper::LOCATION)
15645 && dump ("Ordinary location %u output %u", loc, off);
15646 }
15647 else if (const module_state *import = module_for_ordinary_loc (loc))
15648 {
15649 unsigned off = loc - import->ordinary_locs.first;
15650 sec.u (LK_IMPORT_ORDINARY);
15651 sec.u (import->remap);
15652 sec.u (off);
15653 dump (dumper::LOCATION)
15654 && dump ("Imported ordinary location %u output %u:%u",
15655 import->remap, import->remap, off);
15656 }
15657 else
15658 gcc_unreachable ();
15659 }
15660 else
15661 gcc_unreachable ();
15662 }
15663
15664 location_t
15665 module_state::read_location (bytes_in &sec) const
15666 {
15667 location_t locus = UNKNOWN_LOCATION;
15668 unsigned kind = sec.u ();
15669 switch (kind)
15670 {
15671 default:
15672 {
15673 if (kind < LK_RESERVED + RESERVED_LOCATION_COUNT)
15674 locus = location_t (kind - LK_RESERVED);
15675 else
15676 sec.set_overrun ();
15677 dump (dumper::LOCATION)
15678 && dump ("Reserved location %u", unsigned (locus));
15679 }
15680 break;
15681
15682 case LK_ADHOC:
15683 {
15684 dump (dumper::LOCATION) && dump ("Adhoc location");
15685 locus = read_location (sec);
15686 source_range range;
15687 range.m_start = read_location (sec);
15688 if (range.m_start == UNKNOWN_LOCATION)
15689 range.m_start = locus;
15690 range.m_finish = read_location (sec);
15691 if (locus != loc && range.m_start != loc && range.m_finish != loc)
15692 locus = get_combined_adhoc_loc (line_table, locus, range, NULL);
15693 }
15694 break;
15695
15696 case LK_MACRO:
15697 {
15698 unsigned off = sec.u ();
15699
15700 if (macro_locs.first)
15701 {
15702 location_t adjusted = MAX_LOCATION_T - off;
15703 adjusted -= slurp->loc_deltas.second;
15704 if (adjusted < macro_locs.first)
15705 sec.set_overrun ();
15706 else if (adjusted < macro_locs.second)
15707 locus = adjusted;
15708 else
15709 sec.set_overrun ();
15710 }
15711 else
15712 locus = loc;
15713 dump (dumper::LOCATION)
15714 && dump ("Macro %u becoming %u", off, locus);
15715 }
15716 break;
15717
15718 case LK_ORDINARY:
15719 {
15720 unsigned off = sec.u ();
15721 if (ordinary_locs.second)
15722 {
15723 location_t adjusted = off;
15724
15725 adjusted += slurp->loc_deltas.first;
15726 if (adjusted >= ordinary_locs.second)
15727 sec.set_overrun ();
15728 else if (adjusted >= ordinary_locs.first)
15729 locus = adjusted;
15730 else if (adjusted < spans.main_start ())
15731 locus = off;
15732 }
15733 else
15734 locus = loc;
15735
15736 dump (dumper::LOCATION)
15737 && dump ("Ordinary location %u becoming %u", off, locus);
15738 }
15739 break;
15740
15741 case LK_IMPORT_MACRO:
15742 case LK_IMPORT_ORDINARY:
15743 {
15744 unsigned mod = sec.u ();
15745 unsigned off = sec.u ();
15746 const module_state *import = NULL;
15747
15748 if (!mod && !slurp->remap)
15749 /* This is an early read of a partition location during the
15750 read of our ordinary location map. */
15751 import = this;
15752 else
15753 {
15754 mod = slurp->remap_module (mod);
15755 if (!mod)
15756 sec.set_overrun ();
15757 else
15758 import = (*modules)[mod];
15759 }
15760
15761 if (import)
15762 {
15763 if (kind == LK_IMPORT_MACRO)
15764 {
15765 if (!import->macro_locs.first)
15766 locus = import->loc;
15767 else if (off < import->macro_locs.second - macro_locs.first)
15768 locus = import->macro_locs.second - off - 1;
15769 else
15770 sec.set_overrun ();
15771 }
15772 else
15773 {
15774 if (!import->ordinary_locs.second)
15775 locus = import->loc;
15776 else if (off < (import->ordinary_locs.second
15777 - import->ordinary_locs.first))
15778 locus = import->ordinary_locs.first + off;
15779 else
15780 sec.set_overrun ();
15781 }
15782 }
15783 }
15784 break;
15785 }
15786
15787 return locus;
15788 }
15789
15790 /* Prepare the span adjustments. */
15791
15792 // FIXME:QOI I do not prune the unreachable locations. Modules with
15793 // textually-large GMFs could well cause us to run out of locations.
15794 // Regular single-file modules could also be affected. We should
15795 // determine which locations we need to represent, so that we do not
15796 // grab more locations than necessary. An example is in
15797 // write_macro_maps where we work around macro expansions that are not
15798 // covering any locations -- the macro expands to nothing. Perhaps we
15799 // should decompose locations so that we can have a more graceful
15800 // degradation upon running out?
15801
15802 location_map_info
15803 module_state::write_prepare_maps (module_state_config *)
15804 {
15805 dump () && dump ("Preparing locations");
15806 dump.indent ();
15807
15808 dump () && dump ("Reserved locations [%u,%u) macro [%u,%u)",
15809 spans[loc_spans::SPAN_RESERVED].ordinary.first,
15810 spans[loc_spans::SPAN_RESERVED].ordinary.second,
15811 spans[loc_spans::SPAN_RESERVED].macro.first,
15812 spans[loc_spans::SPAN_RESERVED].macro.second);
15813
15814 location_map_info info;
15815
15816 info.num_maps.first = info.num_maps.second = 0;
15817
15818 /* Figure the alignment of ordinary location spans. */
15819 unsigned max_range = 0;
15820 for (unsigned ix = loc_spans::SPAN_FIRST; ix != spans.length (); ix++)
15821 {
15822 loc_spans::span &span = spans[ix];
15823 line_map_ordinary const *omap
15824 = linemap_check_ordinary (linemap_lookup (line_table,
15825 span.ordinary.first));
15826
15827 /* We should exactly match up. */
15828 gcc_checking_assert (MAP_START_LOCATION (omap) == span.ordinary.first);
15829
15830 line_map_ordinary const *fmap = omap;
15831 for (; MAP_START_LOCATION (omap) < span.ordinary.second; omap++)
15832 {
15833 /* We should never find a module linemap in an interval. */
15834 gcc_checking_assert (!MAP_MODULE_P (omap));
15835
15836 if (max_range < omap->m_range_bits)
15837 max_range = omap->m_range_bits;
15838 }
15839
15840 unsigned count = omap - fmap;
15841 info.num_maps.first += count;
15842
15843 if (span.macro.first != span.macro.second)
15844 {
15845 count = linemap_lookup_macro_index (line_table, span.macro.first) + 1;
15846 count -= linemap_lookup_macro_index (line_table,
15847 span.macro.second - 1);
15848 dump (dumper::LOCATION) && dump ("Span:%u %u macro maps", ix, count);
15849 info.num_maps.second += count;
15850 }
15851 }
15852
15853 /* Adjust the maps. Ordinary ones ascend, and we must maintain
15854 alignment. Macro ones descend, but are unaligned. */
15855 location_t ord_off = spans[loc_spans::SPAN_FIRST].ordinary.first;
15856 location_t mac_off = spans[loc_spans::SPAN_FIRST].macro.second;
15857 location_t range_mask = (1u << max_range) - 1;
15858
15859 dump () && dump ("Ordinary maps range bits:%u, preserve:%x, zero:%u",
15860 max_range, ord_off & range_mask, ord_off & ~range_mask);
15861
15862 for (unsigned ix = loc_spans::SPAN_FIRST; ix != spans.length (); ix++)
15863 {
15864 loc_spans::span &span = spans[ix];
15865
15866 span.macro_delta = mac_off - span.macro.second;
15867 mac_off -= span.macro.second - span.macro.first;
15868 dump () && dump ("Macro span:%u [%u,%u):%u->%d(%u)", ix,
15869 span.macro.first, span.macro.second,
15870 span.macro.second - span.macro.first,
15871 span.macro_delta, span.macro.first + span.macro_delta);
15872
15873 line_map_ordinary const *omap
15874 = linemap_check_ordinary (linemap_lookup (line_table,
15875 span.ordinary.first));
15876 location_t base = MAP_START_LOCATION (omap);
15877
15878 /* Preserve the low MAX_RANGE bits of base by incrementing ORD_OFF. */
15879 unsigned low_bits = base & range_mask;
15880 if ((ord_off & range_mask) > low_bits)
15881 low_bits += range_mask + 1;
15882 ord_off = (ord_off & ~range_mask) + low_bits;
15883 span.ordinary_delta = ord_off - base;
15884
15885 for (; MAP_START_LOCATION (omap) < span.ordinary.second; omap++)
15886 {
15887 location_t start_loc = MAP_START_LOCATION (omap);
15888 unsigned to = start_loc + span.ordinary_delta;
15889 location_t end_loc = MAP_START_LOCATION (omap + 1);
15890
15891 dump () && dump ("Ordinary span:%u [%u,%u):%u->%d(%u)", ix, start_loc,
15892 end_loc, end_loc - start_loc,
15893 span.ordinary_delta, to);
15894
15895 /* There should be no change in the low order bits. */
15896 gcc_checking_assert (((start_loc ^ to) & range_mask) == 0);
15897 }
15898 /* The ending serialized value. */
15899 ord_off = span.ordinary.second + span.ordinary_delta;
15900 }
15901
15902 dump () && dump ("Ordinary hwm:%u macro lwm:%u", ord_off, mac_off);
15903
15904 dump.outdent ();
15905
15906 info.max_range = max_range;
15907
15908 return info;
15909 }
15910
15911 bool
15912 module_state::read_prepare_maps (const module_state_config *cfg)
15913 {
15914 location_t ordinary = line_table->highest_location + 1;
15915 ordinary = ((ordinary + (1u << cfg->ordinary_loc_align))
15916 & ~((1u << cfg->ordinary_loc_align) - 1));
15917 ordinary += cfg->ordinary_locs;
15918
15919 location_t macro = LINEMAPS_MACRO_LOWEST_LOCATION (line_table);
15920 macro -= cfg->macro_locs;
15921
15922 if (ordinary < LINE_MAP_MAX_LOCATION_WITH_COLS
15923 && macro >= LINE_MAP_MAX_LOCATION)
15924 /* OK, we have enough locations. */
15925 return true;
15926
15927 ordinary_locs.first = ordinary_locs.second = 0;
15928 macro_locs.first = macro_locs.second = 0;
15929
15930 static bool informed = false;
15931 if (!informed)
15932 {
15933 /* Just give the notice once. */
15934 informed = true;
15935 inform (loc, "unable to represent further imported source locations");
15936 }
15937
15938 return false;
15939 }
15940
15941 /* Write the location maps. This also determines the shifts for the
15942 location spans. */
15943
15944 void
15945 module_state::write_ordinary_maps (elf_out *to, location_map_info &info,
15946 module_state_config *cfg, bool has_partitions,
15947 unsigned *crc_p)
15948 {
15949 dump () && dump ("Writing ordinary location maps");
15950 dump.indent ();
15951
15952 vec<const char *> filenames;
15953 filenames.create (20);
15954
15955 /* Determine the unique filenames. */
15956 // FIXME:QOI We should find the set of filenames when working out
15957 // which locations we actually need. See write_prepare_maps.
15958 for (unsigned ix = loc_spans::SPAN_FIRST; ix != spans.length (); ix++)
15959 {
15960 loc_spans::span &span = spans[ix];
15961 line_map_ordinary const *omap
15962 = linemap_check_ordinary (linemap_lookup (line_table,
15963 span.ordinary.first));
15964
15965 /* We should exactly match up. */
15966 gcc_checking_assert (MAP_START_LOCATION (omap) == span.ordinary.first);
15967
15968 for (; MAP_START_LOCATION (omap) < span.ordinary.second; omap++)
15969 {
15970 const char *fname = ORDINARY_MAP_FILE_NAME (omap);
15971
15972 /* We should never find a module linemap in an interval. */
15973 gcc_checking_assert (!MAP_MODULE_P (omap));
15974
15975 /* We expect very few filenames, so just an array. */
15976 for (unsigned jx = filenames.length (); jx--;)
15977 {
15978 const char *name = filenames[jx];
15979 if (0 == strcmp (name, fname))
15980 {
15981 /* Reset the linemap's name, because for things like
15982 preprocessed input we could have multple
15983 instances of the same name, and we'd rather not
15984 percolate that. */
15985 const_cast<line_map_ordinary *> (omap)->to_file = name;
15986 fname = NULL;
15987 break;
15988 }
15989 }
15990 if (fname)
15991 filenames.safe_push (fname);
15992 }
15993 }
15994
15995 bytes_out sec (to);
15996 sec.begin ();
15997
15998 /* Write the filenames. */
15999 unsigned len = filenames.length ();
16000 sec.u (len);
16001 dump () && dump ("%u source file names", len);
16002 for (unsigned ix = 0; ix != len; ix++)
16003 {
16004 const char *fname = filenames[ix];
16005 dump (dumper::LOCATION) && dump ("Source file[%u]=%s", ix, fname);
16006 sec.str (fname);
16007 }
16008
16009 location_t offset = spans[loc_spans::SPAN_FIRST].ordinary.first;
16010 location_t range_mask = (1u << info.max_range) - 1;
16011
16012 dump () && dump ("Ordinary maps:%u, range bits:%u, preserve:%x, zero:%u",
16013 info.num_maps.first, info.max_range, offset & range_mask,
16014 offset & ~range_mask);
16015 sec.u (info.num_maps.first); /* Num maps. */
16016 sec.u (info.max_range); /* Maximum range bits */
16017 sec.u (offset & range_mask); /* Bits to preserve. */
16018 sec.u (offset & ~range_mask);
16019
16020 for (unsigned ix = loc_spans::SPAN_FIRST; ix != spans.length (); ix++)
16021 {
16022 loc_spans::span &span = spans[ix];
16023 line_map_ordinary const *omap
16024 = linemap_check_ordinary (linemap_lookup (line_table,
16025 span.ordinary.first));
16026 for (; MAP_START_LOCATION (omap) < span.ordinary.second; omap++)
16027 {
16028 location_t start_loc = MAP_START_LOCATION (omap);
16029 unsigned to = start_loc + span.ordinary_delta;
16030
16031 dump (dumper::LOCATION)
16032 && dump ("Span:%u ordinary [%u,%u)->%u", ix, start_loc,
16033 MAP_START_LOCATION (omap + 1), to);
16034
16035 /* There should be no change in the low order bits. */
16036 gcc_checking_assert (((start_loc ^ to) & range_mask) == 0);
16037 sec.u (to);
16038
16039 /* Making accessors just for here, seems excessive. */
16040 sec.u (omap->reason);
16041 sec.u (omap->sysp);
16042 sec.u (omap->m_range_bits);
16043 sec.u (omap->m_column_and_range_bits - omap->m_range_bits);
16044
16045 const char *fname = ORDINARY_MAP_FILE_NAME (omap);
16046 for (unsigned ix = 0; ix != filenames.length (); ix++)
16047 if (filenames[ix] == fname)
16048 {
16049 sec.u (ix);
16050 break;
16051 }
16052 sec.u (ORDINARY_MAP_STARTING_LINE_NUMBER (omap));
16053
16054 /* Write the included from location, which means reading it
16055 while reading in the ordinary maps. So we'd better not
16056 be getting ahead of ourselves. */
16057 location_t from = linemap_included_from (omap);
16058 gcc_checking_assert (from < MAP_START_LOCATION (omap));
16059 if (from != UNKNOWN_LOCATION && has_partitions)
16060 {
16061 /* A partition's span will have a from pointing at a
16062 MODULE_INC. Find that map's from. */
16063 line_map_ordinary const *fmap
16064 = linemap_check_ordinary (linemap_lookup (line_table, from));
16065 if (MAP_MODULE_P (fmap))
16066 from = linemap_included_from (fmap);
16067 }
16068 write_location (sec, from);
16069 }
16070 /* The ending serialized value. */
16071 offset = MAP_START_LOCATION (omap) + span.ordinary_delta;
16072 }
16073 dump () && dump ("Ordinary location hwm:%u", offset);
16074 sec.u (offset);
16075
16076 // Record number of locations and alignment.
16077 cfg->ordinary_loc_align = info.max_range;
16078 cfg->ordinary_locs = offset;
16079
16080 filenames.release ();
16081
16082 sec.end (to, to->name (MOD_SNAME_PFX ".olm"), crc_p);
16083 dump.outdent ();
16084 }
16085
16086 void
16087 module_state::write_macro_maps (elf_out *to, location_map_info &info,
16088 module_state_config *cfg, unsigned *crc_p)
16089 {
16090 dump () && dump ("Writing macro location maps");
16091 dump.indent ();
16092
16093 bytes_out sec (to);
16094 sec.begin ();
16095
16096 dump () && dump ("Macro maps:%u", info.num_maps.second);
16097 sec.u (info.num_maps.second);
16098
16099 location_t offset = spans[loc_spans::SPAN_FIRST].macro.second;
16100 sec.u (offset);
16101
16102 unsigned macro_num = 0;
16103 for (unsigned ix = loc_spans::SPAN_FIRST; ix != spans.length (); ix++)
16104 {
16105 loc_spans::span &span = spans[ix];
16106 if (span.macro.first == span.macro.second)
16107 continue;
16108
16109 for (unsigned first
16110 = linemap_lookup_macro_index (line_table, span.macro.second - 1);
16111 first < LINEMAPS_MACRO_USED (line_table);
16112 first++)
16113 {
16114 line_map_macro const *mmap
16115 = LINEMAPS_MACRO_MAP_AT (line_table, first);
16116 location_t start_loc = MAP_START_LOCATION (mmap);
16117 if (start_loc < span.macro.first)
16118 break;
16119 if (macro_num == info.num_maps.second)
16120 {
16121 /* We're ending on an empty macro expansion. The
16122 preprocessor doesn't prune such things. */
16123 // FIXME:QOI This is an example of the non-pruning of
16124 // locations. See write_prepare_maps.
16125 gcc_checking_assert (!mmap->n_tokens);
16126 continue;
16127 }
16128
16129 sec.u (offset);
16130 sec.u (mmap->n_tokens);
16131 sec.cpp_node (mmap->macro);
16132 write_location (sec, mmap->expansion);
16133 const location_t *locs = mmap->macro_locations;
16134 /* There are lots of identical runs. */
16135 location_t prev = UNKNOWN_LOCATION;
16136 unsigned count = 0;
16137 unsigned runs = 0;
16138 for (unsigned jx = mmap->n_tokens * 2; jx--;)
16139 {
16140 location_t tok_loc = locs[jx];
16141 if (tok_loc == prev)
16142 {
16143 count++;
16144 continue;
16145 }
16146 runs++;
16147 sec.u (count);
16148 count = 1;
16149 prev = tok_loc;
16150 write_location (sec, tok_loc);
16151 }
16152 sec.u (count);
16153 dump (dumper::LOCATION)
16154 && dump ("Span:%u macro:%u %I %u/%u*2 locations [%u,%u)->%u",
16155 ix, macro_num, identifier (mmap->macro),
16156 runs, mmap->n_tokens,
16157 start_loc, start_loc + mmap->n_tokens,
16158 start_loc + span.macro_delta);
16159 macro_num++;
16160 offset -= mmap->n_tokens;
16161 gcc_checking_assert (offset == start_loc + span.macro_delta);
16162 }
16163 }
16164 dump () && dump ("Macro location lwm:%u", offset);
16165 sec.u (offset);
16166 gcc_assert (macro_num == info.num_maps.second);
16167
16168 cfg->macro_locs = MAX_LOCATION_T + 1 - offset;
16169
16170 sec.end (to, to->name (MOD_SNAME_PFX ".mlm"), crc_p);
16171 dump.outdent ();
16172 }
16173
16174 bool
16175 module_state::read_ordinary_maps ()
16176 {
16177 bytes_in sec;
16178
16179 if (!sec.begin (loc, from (), MOD_SNAME_PFX ".olm"))
16180 return false;
16181 dump () && dump ("Reading ordinary location maps");
16182 dump.indent ();
16183
16184 /* Read the filename table. */
16185 unsigned len = sec.u ();
16186 dump () && dump ("%u source file names", len);
16187 vec<const char *> filenames;
16188 filenames.create (len);
16189 for (unsigned ix = 0; ix != len; ix++)
16190 {
16191 size_t l;
16192 const char *buf = sec.str (&l);
16193 char *fname = XNEWVEC (char, l + 1);
16194 memcpy (fname, buf, l + 1);
16195 dump (dumper::LOCATION) && dump ("Source file[%u]=%s", ix, fname);
16196 /* We leak these names into the line-map table. But it
16197 doesn't own them. */
16198 filenames.quick_push (fname);
16199 }
16200
16201 unsigned num_ordinary = sec.u ();
16202 unsigned max_range = sec.u ();
16203 unsigned low_bits = sec.u ();
16204 location_t zero = sec.u ();
16205 location_t range_mask = (1u << max_range) - 1;
16206
16207 dump () && dump ("Ordinary maps:%u, range bits:%u, preserve:%x, zero:%u",
16208 num_ordinary, max_range, low_bits, zero);
16209
16210 location_t offset = line_table->highest_location + 1;
16211 /* Ensure offset doesn't go backwards at the start. */
16212 if ((offset & range_mask) > low_bits)
16213 offset += range_mask + 1;
16214 offset = (offset & ~range_mask);
16215
16216 bool propagated = spans.maybe_propagate (this, offset + low_bits);
16217
16218 line_map_ordinary *maps = static_cast<line_map_ordinary *>
16219 (line_map_new_raw (line_table, false, num_ordinary));
16220
16221 location_t lwm = offset;
16222 slurp->loc_deltas.first = offset - zero;
16223 ordinary_locs.first = zero + low_bits + slurp->loc_deltas.first;
16224 dump () && dump ("Ordinary loc delta %d", slurp->loc_deltas.first);
16225
16226 for (unsigned ix = 0; ix != num_ordinary && !sec.get_overrun (); ix++)
16227 {
16228 line_map_ordinary *map = &maps[ix];
16229 unsigned hwm = sec.u ();
16230
16231 /* Record the current HWM so that the below read_location is
16232 ok. */
16233 ordinary_locs.second = hwm + slurp->loc_deltas.first;
16234 map->start_location = hwm + (offset - zero);
16235 if (map->start_location < lwm)
16236 sec.set_overrun ();
16237 lwm = map->start_location;
16238 dump (dumper::LOCATION) && dump ("Map:%u %u->%u", ix, hwm, lwm);
16239 map->reason = lc_reason (sec.u ());
16240 map->sysp = sec.u ();
16241 map->m_range_bits = sec.u ();
16242 map->m_column_and_range_bits = map->m_range_bits + sec.u ();
16243
16244 unsigned fnum = sec.u ();
16245 map->to_file = (fnum < filenames.length () ? filenames[fnum] : "");
16246 map->to_line = sec.u ();
16247
16248 /* Root the outermost map at our location. */
16249 location_t from = read_location (sec);
16250 map->included_from = from != UNKNOWN_LOCATION ? from : loc;
16251 }
16252
16253 location_t hwm = sec.u ();
16254 ordinary_locs.second = hwm + slurp->loc_deltas.first;
16255
16256 /* highest_location is the one handed out, not the next one to
16257 hand out. */
16258 line_table->highest_location = ordinary_locs.second - 1;
16259
16260 if (line_table->highest_location >= LINE_MAP_MAX_LOCATION_WITH_COLS)
16261 /* We shouldn't run out of locations, as we checked before
16262 starting. */
16263 sec.set_overrun ();
16264 dump () && dump ("Ordinary location hwm:%u", ordinary_locs.second);
16265
16266 if (propagated)
16267 spans.close ();
16268
16269 filenames.release ();
16270
16271 dump.outdent ();
16272 if (!sec.end (from ()))
16273 return false;
16274
16275 return true;
16276 }
16277
16278 bool
16279 module_state::read_macro_maps ()
16280 {
16281 bytes_in sec;
16282
16283 if (!sec.begin (loc, from (), MOD_SNAME_PFX ".mlm"))
16284 return false;
16285 dump () && dump ("Reading macro location maps");
16286 dump.indent ();
16287
16288 unsigned num_macros = sec.u ();
16289 location_t zero = sec.u ();
16290 dump () && dump ("Macro maps:%u zero:%u", num_macros, zero);
16291
16292 bool propagated = spans.maybe_propagate (this);
16293
16294 location_t offset = LINEMAPS_MACRO_LOWEST_LOCATION (line_table);
16295 slurp->loc_deltas.second = zero - offset;
16296 macro_locs.second = zero - slurp->loc_deltas.second;
16297 dump () && dump ("Macro loc delta %d", slurp->loc_deltas.second);
16298
16299 for (unsigned ix = 0; ix != num_macros && !sec.get_overrun (); ix++)
16300 {
16301 unsigned lwm = sec.u ();
16302 /* Record the current LWM so that the below read_location is
16303 ok. */
16304 macro_locs.first = lwm - slurp->loc_deltas.second;
16305
16306 unsigned n_tokens = sec.u ();
16307 cpp_hashnode *node = sec.cpp_node ();
16308 location_t exp_loc = read_location (sec);
16309
16310 const line_map_macro *macro
16311 = linemap_enter_macro (line_table, node, exp_loc, n_tokens);
16312 if (!macro)
16313 /* We shouldn't run out of locations, as we checked that we
16314 had enough before starting. */
16315 break;
16316
16317 location_t *locs = macro->macro_locations;
16318 location_t tok_loc = UNKNOWN_LOCATION;
16319 unsigned count = sec.u ();
16320 unsigned runs = 0;
16321 for (unsigned jx = macro->n_tokens * 2; jx-- && !sec.get_overrun ();)
16322 {
16323 while (!count-- && !sec.get_overrun ())
16324 {
16325 runs++;
16326 tok_loc = read_location (sec);
16327 count = sec.u ();
16328 }
16329 locs[jx] = tok_loc;
16330 }
16331 if (count)
16332 sec.set_overrun ();
16333 dump (dumper::LOCATION)
16334 && dump ("Macro:%u %I %u/%u*2 locations [%u,%u)",
16335 ix, identifier (node), runs, n_tokens,
16336 MAP_START_LOCATION (macro),
16337 MAP_START_LOCATION (macro) + n_tokens);
16338 }
16339 location_t lwm = sec.u ();
16340 macro_locs.first = lwm - slurp->loc_deltas.second;
16341
16342 dump () && dump ("Macro location lwm:%u", macro_locs.first);
16343
16344 if (propagated)
16345 spans.close ();
16346
16347 dump.outdent ();
16348 if (!sec.end (from ()))
16349 return false;
16350
16351 return true;
16352 }
16353
16354 /* Serialize the definition of MACRO. */
16355
16356 void
16357 module_state::write_define (bytes_out &sec, const cpp_macro *macro, bool located)
16358 {
16359 sec.u (macro->count);
16360
16361 sec.b (macro->fun_like);
16362 sec.b (macro->variadic);
16363 sec.b (macro->syshdr);
16364 sec.bflush ();
16365
16366 if (located)
16367 write_location (sec, macro->line);
16368 if (macro->fun_like)
16369 {
16370 sec.u (macro->paramc);
16371 const cpp_hashnode *const *parms = macro->parm.params;
16372 for (unsigned ix = 0; ix != macro->paramc; ix++)
16373 sec.cpp_node (parms[ix]);
16374 }
16375
16376 unsigned len = 0;
16377 for (unsigned ix = 0; ix != macro->count; ix++)
16378 {
16379 const cpp_token *token = &macro->exp.tokens[ix];
16380 if (located)
16381 write_location (sec, token->src_loc);
16382 sec.u (token->type);
16383 sec.u (token->flags);
16384 switch (cpp_token_val_index (token))
16385 {
16386 default:
16387 gcc_unreachable ();
16388
16389 case CPP_TOKEN_FLD_ARG_NO:
16390 /* An argument reference. */
16391 sec.u (token->val.macro_arg.arg_no);
16392 sec.cpp_node (token->val.macro_arg.spelling);
16393 break;
16394
16395 case CPP_TOKEN_FLD_NODE:
16396 /* An identifier. */
16397 sec.cpp_node (token->val.node.node);
16398 if (token->val.node.spelling == token->val.node.node)
16399 /* The spelling will usually be the same. so optimize
16400 that. */
16401 sec.str (NULL, 0);
16402 else
16403 sec.cpp_node (token->val.node.spelling);
16404 break;
16405
16406 case CPP_TOKEN_FLD_NONE:
16407 break;
16408
16409 case CPP_TOKEN_FLD_STR:
16410 /* A string, number or comment. Not always NUL terminated,
16411 we stream out in a single contatenation with embedded
16412 NULs as that's a safe default. */
16413 len += token->val.str.len + 1;
16414 sec.u (token->val.str.len);
16415 break;
16416
16417 case CPP_TOKEN_FLD_SOURCE:
16418 case CPP_TOKEN_FLD_TOKEN_NO:
16419 case CPP_TOKEN_FLD_PRAGMA:
16420 /* These do not occur inside a macro itself. */
16421 gcc_unreachable ();
16422 }
16423 }
16424
16425 if (len)
16426 {
16427 char *ptr = reinterpret_cast<char *> (sec.buf (len));
16428 len = 0;
16429 for (unsigned ix = 0; ix != macro->count; ix++)
16430 {
16431 const cpp_token *token = &macro->exp.tokens[ix];
16432 if (cpp_token_val_index (token) == CPP_TOKEN_FLD_STR)
16433 {
16434 memcpy (ptr + len, token->val.str.text,
16435 token->val.str.len);
16436 len += token->val.str.len;
16437 ptr[len++] = 0;
16438 }
16439 }
16440 }
16441 }
16442
16443 /* Read a macro definition. */
16444
16445 cpp_macro *
16446 module_state::read_define (bytes_in &sec, cpp_reader *reader, bool located) const
16447 {
16448 unsigned count = sec.u ();
16449 /* We rely on knowing cpp_reader's hash table is ident_hash, and
16450 it's subobject allocator is stringpool_ggc_alloc and that is just
16451 a wrapper for ggc_alloc_atomic. */
16452 cpp_macro *macro
16453 = (cpp_macro *)ggc_alloc_atomic (sizeof (cpp_macro)
16454 + sizeof (cpp_token) * (count - !!count));
16455 memset (macro, 0, sizeof (cpp_macro) + sizeof (cpp_token) * (count - !!count));
16456
16457 macro->count = count;
16458 macro->kind = cmk_macro;
16459 macro->imported_p = true;
16460
16461 macro->fun_like = sec.b ();
16462 macro->variadic = sec.b ();
16463 macro->syshdr = sec.b ();
16464 sec.bflush ();
16465
16466 macro->line = located ? read_location (sec) : loc;
16467
16468 if (macro->fun_like)
16469 {
16470 unsigned paramc = sec.u ();
16471 cpp_hashnode **params
16472 = (cpp_hashnode **)ggc_alloc_atomic (sizeof (cpp_hashnode *) * paramc);
16473 macro->paramc = paramc;
16474 macro->parm.params = params;
16475 for (unsigned ix = 0; ix != paramc; ix++)
16476 params[ix] = sec.cpp_node ();
16477 }
16478
16479 unsigned len = 0;
16480 for (unsigned ix = 0; ix != count && !sec.get_overrun (); ix++)
16481 {
16482 cpp_token *token = &macro->exp.tokens[ix];
16483 token->src_loc = located ? read_location (sec) : loc;
16484 token->type = cpp_ttype (sec.u ());
16485 token->flags = sec.u ();
16486 switch (cpp_token_val_index (token))
16487 {
16488 default:
16489 sec.set_overrun ();
16490 break;
16491
16492 case CPP_TOKEN_FLD_ARG_NO:
16493 /* An argument reference. */
16494 {
16495 unsigned arg_no = sec.u ();
16496 if (arg_no - 1 >= macro->paramc)
16497 sec.set_overrun ();
16498 token->val.macro_arg.arg_no = arg_no;
16499 token->val.macro_arg.spelling = sec.cpp_node ();
16500 }
16501 break;
16502
16503 case CPP_TOKEN_FLD_NODE:
16504 /* An identifier. */
16505 token->val.node.node = sec.cpp_node ();
16506 token->val.node.spelling = sec.cpp_node ();
16507 if (!token->val.node.spelling)
16508 token->val.node.spelling = token->val.node.node;
16509 break;
16510
16511 case CPP_TOKEN_FLD_NONE:
16512 break;
16513
16514 case CPP_TOKEN_FLD_STR:
16515 /* A string, number or comment. */
16516 token->val.str.len = sec.u ();
16517 len += token->val.str.len + 1;
16518 break;
16519 }
16520 }
16521
16522 if (len)
16523 if (const char *ptr = reinterpret_cast<const char *> (sec.buf (len)))
16524 {
16525 /* There should be a final NUL. */
16526 if (ptr[len-1])
16527 sec.set_overrun ();
16528 /* cpp_alloc_token_string will add a final NUL. */
16529 const unsigned char *buf
16530 = cpp_alloc_token_string (reader, (const unsigned char *)ptr, len - 1);
16531 len = 0;
16532 for (unsigned ix = 0; ix != count && !sec.get_overrun (); ix++)
16533 {
16534 cpp_token *token = &macro->exp.tokens[ix];
16535 if (cpp_token_val_index (token) == CPP_TOKEN_FLD_STR)
16536 {
16537 token->val.str.text = buf + len;
16538 len += token->val.str.len;
16539 if (buf[len++])
16540 sec.set_overrun ();
16541 }
16542 }
16543 }
16544
16545 if (sec.get_overrun ())
16546 return NULL;
16547 return macro;
16548 }
16549
16550 /* Exported macro data. */
16551 struct macro_export {
16552 cpp_macro *def;
16553 location_t undef_loc;
16554
16555 macro_export ()
16556 :def (NULL), undef_loc (UNKNOWN_LOCATION)
16557 {
16558 }
16559 };
16560
16561 /* Imported macro data. */
16562 class macro_import {
16563 public:
16564 struct slot {
16565 #if defined (WORDS_BIGENDIAN) && SIZEOF_VOID_P == 8
16566 int offset;
16567 #endif
16568 /* We need to ensure we don't use the LSB for representation, as
16569 that's the union discriminator below. */
16570 unsigned bits;
16571
16572 #if !(defined (WORDS_BIGENDIAN) && SIZEOF_VOID_P == 8)
16573 int offset;
16574 #endif
16575
16576 public:
16577 enum Layout {
16578 L_DEF = 1,
16579 L_UNDEF = 2,
16580 L_BOTH = 3,
16581 L_MODULE_SHIFT = 2
16582 };
16583
16584 public:
16585 /* Not a regular ctor, because we put it in a union, and that's
16586 not allowed in C++ 98. */
16587 static slot ctor (unsigned module, unsigned defness)
16588 {
16589 gcc_checking_assert (defness);
16590 slot s;
16591 s.bits = defness | (module << L_MODULE_SHIFT);
16592 s.offset = -1;
16593 return s;
16594 }
16595
16596 public:
16597 unsigned get_defness () const
16598 {
16599 return bits & L_BOTH;
16600 }
16601 unsigned get_module () const
16602 {
16603 return bits >> L_MODULE_SHIFT;
16604 }
16605 void become_undef ()
16606 {
16607 bits &= ~unsigned (L_DEF);
16608 bits |= unsigned (L_UNDEF);
16609 }
16610 };
16611
16612 private:
16613 typedef vec<slot, va_heap, vl_embed> ary_t;
16614 union either {
16615 /* Discriminated by bits 0|1 != 0. The expected case is that
16616 there will be exactly one slot per macro, hence the effort of
16617 packing that. */
16618 ary_t *ary;
16619 slot single;
16620 } u;
16621
16622 public:
16623 macro_import ()
16624 {
16625 u.ary = NULL;
16626 }
16627
16628 private:
16629 bool single_p () const
16630 {
16631 return u.single.bits & slot::L_BOTH;
16632 }
16633 bool occupied_p () const
16634 {
16635 return u.ary != NULL;
16636 }
16637
16638 public:
16639 unsigned length () const
16640 {
16641 gcc_checking_assert (occupied_p ());
16642 return single_p () ? 1 : u.ary->length ();
16643 }
16644 slot &operator[] (unsigned ix)
16645 {
16646 gcc_checking_assert (occupied_p ());
16647 if (single_p ())
16648 {
16649 gcc_checking_assert (!ix);
16650 return u.single;
16651 }
16652 else
16653 return (*u.ary)[ix];
16654 }
16655
16656 public:
16657 slot &exported ();
16658 slot &append (unsigned module, unsigned defness);
16659 };
16660
16661 /* O is a new import to append to the list for. If we're an empty
16662 set, initialize us. */
16663
16664 macro_import::slot &
16665 macro_import::append (unsigned module, unsigned defness)
16666 {
16667 if (!occupied_p ())
16668 {
16669 u.single = slot::ctor (module, defness);
16670 return u.single;
16671 }
16672 else
16673 {
16674 bool single = single_p ();
16675 ary_t *m = single ? NULL : u.ary;
16676 vec_safe_reserve (m, 1 + single);
16677 if (single)
16678 m->quick_push (u.single);
16679 u.ary = m;
16680 return *u.ary->quick_push (slot::ctor (module, defness));
16681 }
16682 }
16683
16684 /* We're going to export something. Make sure the first import slot
16685 is us. */
16686
16687 macro_import::slot &
16688 macro_import::exported ()
16689 {
16690 if (occupied_p () && !(*this)[0].get_module ())
16691 {
16692 slot &res = (*this)[0];
16693 res.bits |= slot::L_DEF;
16694 return res;
16695 }
16696
16697 slot *a = &append (0, slot::L_DEF);
16698 if (!single_p ())
16699 {
16700 slot &f = (*this)[0];
16701 std::swap (f, *a);
16702 a = &f;
16703 }
16704 return *a;
16705 }
16706
16707 /* The import (&exported) macros. cpp_hasnode's deferred field
16708 indexes this array (offset by 1, so zero means 'not present'. */
16709
16710 static vec<macro_import, va_heap, vl_embed> *macro_imports;
16711
16712 /* The exported macros. A macro_import slot's zeroth element's offset
16713 indexes this array. If the zeroth slot is not for module zero,
16714 there is no export. */
16715
16716 static vec<macro_export, va_heap, vl_embed> *macro_exports;
16717
16718 /* The reachable set of header imports from this TU. */
16719
16720 static GTY(()) bitmap headers;
16721
16722 /* Get the (possibly empty) macro imports for NODE. */
16723
16724 static macro_import &
16725 get_macro_imports (cpp_hashnode *node)
16726 {
16727 if (node->deferred)
16728 return (*macro_imports)[node->deferred - 1];
16729
16730 vec_safe_reserve (macro_imports, 1);
16731 node->deferred = macro_imports->length () + 1;
16732 return *vec_safe_push (macro_imports, macro_import ());
16733 }
16734
16735 /* Get the macro export for export EXP of NODE. */
16736
16737 static macro_export &
16738 get_macro_export (macro_import::slot &slot)
16739 {
16740 if (slot.offset >= 0)
16741 return (*macro_exports)[slot.offset];
16742
16743 vec_safe_reserve (macro_exports, 1);
16744 slot.offset = macro_exports->length ();
16745 return *macro_exports->quick_push (macro_export ());
16746 }
16747
16748 /* If NODE is an exportable macro, add it to the export set. */
16749
16750 static int
16751 maybe_add_macro (cpp_reader *, cpp_hashnode *node, void *data_)
16752 {
16753 bool exporting = false;
16754
16755 if (cpp_user_macro_p (node))
16756 if (cpp_macro *macro = node->value.macro)
16757 /* Ignore imported, builtins, command line and forced header macros. */
16758 if (!macro->imported_p
16759 && !macro->lazy && macro->line >= spans.main_start ())
16760 {
16761 gcc_checking_assert (macro->kind == cmk_macro);
16762 /* I don't want to deal with this corner case, that I suspect is
16763 a devil's advocate reading of the standard. */
16764 gcc_checking_assert (!macro->extra_tokens);
16765
16766 macro_import::slot &slot = get_macro_imports (node).exported ();
16767 macro_export &exp = get_macro_export (slot);
16768 exp.def = macro;
16769 exporting = true;
16770 }
16771
16772 if (!exporting && node->deferred)
16773 {
16774 macro_import &imports = (*macro_imports)[node->deferred - 1];
16775 macro_import::slot &slot = imports[0];
16776 if (!slot.get_module ())
16777 {
16778 gcc_checking_assert (slot.get_defness ());
16779 exporting = true;
16780 }
16781 }
16782
16783 if (exporting)
16784 static_cast<vec<cpp_hashnode *> *> (data_)->safe_push (node);
16785
16786 return 1; /* Don't stop. */
16787 }
16788
16789 /* Order cpp_hashnodes A_ and B_ by their exported macro locations. */
16790
16791 static int
16792 macro_loc_cmp (const void *a_, const void *b_)
16793 {
16794 const cpp_hashnode *node_a = *(const cpp_hashnode *const *)a_;
16795 macro_import &import_a = (*macro_imports)[node_a->deferred - 1];
16796 const macro_export &export_a = (*macro_exports)[import_a[0].offset];
16797 location_t loc_a = export_a.def ? export_a.def->line : export_a.undef_loc;
16798
16799 const cpp_hashnode *node_b = *(const cpp_hashnode *const *)b_;
16800 macro_import &import_b = (*macro_imports)[node_b->deferred - 1];
16801 const macro_export &export_b = (*macro_exports)[import_b[0].offset];
16802 location_t loc_b = export_b.def ? export_b.def->line : export_b.undef_loc;
16803
16804 if (loc_a < loc_b)
16805 return +1;
16806 else if (loc_a > loc_b)
16807 return -1;
16808 else
16809 return 0;
16810 }
16811
16812 /* Write out the exported defines. This is two sections, one
16813 containing the definitions, the other a table of node names. */
16814
16815 unsigned
16816 module_state::write_macros (elf_out *to, cpp_reader *reader, unsigned *crc_p)
16817 {
16818 dump () && dump ("Writing macros");
16819 dump.indent ();
16820
16821 vec<cpp_hashnode *> macros;
16822 macros.create (100);
16823 cpp_forall_identifiers (reader, maybe_add_macro, &macros);
16824
16825 dump (dumper::MACRO) && dump ("No more than %u macros", macros.length ());
16826
16827 macros.qsort (macro_loc_cmp);
16828
16829 /* Write the defs */
16830 bytes_out sec (to);
16831 sec.begin ();
16832
16833 unsigned count = 0;
16834 for (unsigned ix = macros.length (); ix--;)
16835 {
16836 cpp_hashnode *node = macros[ix];
16837 macro_import::slot &slot = (*macro_imports)[node->deferred - 1][0];
16838 gcc_assert (!slot.get_module () && slot.get_defness ());
16839
16840 macro_export &mac = (*macro_exports)[slot.offset];
16841 gcc_assert (!!(slot.get_defness () & macro_import::slot::L_UNDEF)
16842 == (mac.undef_loc != UNKNOWN_LOCATION)
16843 && !!(slot.get_defness () & macro_import::slot::L_DEF)
16844 == (mac.def != NULL));
16845
16846 if (IDENTIFIER_KEYWORD_P (identifier (node)))
16847 {
16848 warning_at (mac.def->line, 0,
16849 "not exporting %<#define %E%> as it is a keyword",
16850 identifier (node));
16851 slot.offset = 0;
16852 continue;
16853 }
16854
16855 count++;
16856 slot.offset = sec.pos;
16857 dump (dumper::MACRO)
16858 && dump ("Writing macro %s%s%s %I at %u",
16859 slot.get_defness () & macro_import::slot::L_UNDEF
16860 ? "#undef" : "",
16861 slot.get_defness () == macro_import::slot::L_BOTH
16862 ? " & " : "",
16863 slot.get_defness () & macro_import::slot::L_DEF
16864 ? "#define" : "",
16865 identifier (node), slot.offset);
16866 if (mac.undef_loc != UNKNOWN_LOCATION)
16867 write_location (sec, mac.undef_loc);
16868 if (mac.def)
16869 write_define (sec, mac.def);
16870 }
16871 sec.end (to, to->name (MOD_SNAME_PFX ".def"), crc_p);
16872
16873 if (count)
16874 {
16875 /* Write the table. */
16876 bytes_out sec (to);
16877 sec.begin ();
16878 sec.u (count);
16879
16880 for (unsigned ix = macros.length (); ix--;)
16881 {
16882 const cpp_hashnode *node = macros[ix];
16883 macro_import::slot &slot = (*macro_imports)[node->deferred - 1][0];
16884
16885 if (slot.offset)
16886 {
16887 sec.cpp_node (node);
16888 sec.u (slot.get_defness ());
16889 sec.u (slot.offset);
16890 }
16891 }
16892 sec.end (to, to->name (MOD_SNAME_PFX ".mac"), crc_p);
16893 }
16894
16895 macros.release ();
16896 dump.outdent ();
16897 return count;
16898 }
16899
16900 bool
16901 module_state::read_macros ()
16902 {
16903 /* Get the def section. */
16904 if (!slurp->macro_defs.begin (loc, from (), MOD_SNAME_PFX ".def"))
16905 return false;
16906
16907 /* Get the tbl section, if there are defs. */
16908 if (slurp->macro_defs.more_p ()
16909 && !slurp->macro_tbl.begin (loc, from (), MOD_SNAME_PFX ".mac"))
16910 return false;
16911
16912 return true;
16913 }
16914
16915 /* Install the macro name table. */
16916
16917 void
16918 module_state::install_macros ()
16919 {
16920 bytes_in &sec = slurp->macro_tbl;
16921 if (!sec.size)
16922 return;
16923
16924 dump () && dump ("Reading macro table %M", this);
16925 dump.indent ();
16926
16927 unsigned count = sec.u ();
16928 dump () && dump ("%u macros", count);
16929 while (count--)
16930 {
16931 cpp_hashnode *node = sec.cpp_node ();
16932 macro_import &imp = get_macro_imports (node);
16933 unsigned flags = sec.u () & macro_import::slot::L_BOTH;
16934 if (!flags)
16935 sec.set_overrun ();
16936
16937 if (sec.get_overrun ())
16938 break;
16939
16940 macro_import::slot &slot = imp.append (mod, flags);
16941 slot.offset = sec.u ();
16942
16943 dump (dumper::MACRO)
16944 && dump ("Read %s macro %s%s%s %I at %u",
16945 imp.length () > 1 ? "add" : "new",
16946 flags & macro_import::slot::L_UNDEF ? "#undef" : "",
16947 flags == macro_import::slot::L_BOTH ? " & " : "",
16948 flags & macro_import::slot::L_DEF ? "#define" : "",
16949 identifier (node), slot.offset);
16950
16951 /* We'll leak an imported definition's TOKEN_FLD_STR's data
16952 here. But that only happens when we've had to resolve the
16953 deferred macro before this import -- why are you doing
16954 that? */
16955 if (cpp_macro *cur = cpp_set_deferred_macro (node))
16956 if (!cur->imported_p)
16957 {
16958 macro_import::slot &slot = imp.exported ();
16959 macro_export &exp = get_macro_export (slot);
16960 exp.def = cur;
16961 dump (dumper::MACRO)
16962 && dump ("Saving current #define %I", identifier (node));
16963 }
16964 }
16965
16966 /* We're now done with the table. */
16967 elf_in::release (slurp->from, sec);
16968
16969 dump.outdent ();
16970 }
16971
16972 /* Import the transitive macros. */
16973
16974 void
16975 module_state::import_macros ()
16976 {
16977 bitmap_ior_into (headers, slurp->headers);
16978
16979 bitmap_iterator bititer;
16980 unsigned bitnum;
16981 EXECUTE_IF_SET_IN_BITMAP (slurp->headers, 0, bitnum, bititer)
16982 (*modules)[bitnum]->install_macros ();
16983 }
16984
16985 /* NODE is being undefined at LOC. Record it in the export table, if
16986 necessary. */
16987
16988 void
16989 module_state::undef_macro (cpp_reader *, location_t loc, cpp_hashnode *node)
16990 {
16991 if (!node->deferred)
16992 /* The macro is not imported, so our undef is irrelevant. */
16993 return;
16994
16995 unsigned n = dump.push (NULL);
16996
16997 macro_import::slot &slot = (*macro_imports)[node->deferred - 1].exported ();
16998 macro_export &exp = get_macro_export (slot);
16999
17000 exp.undef_loc = loc;
17001 slot.become_undef ();
17002 exp.def = NULL;
17003
17004 dump (dumper::MACRO) && dump ("Recording macro #undef %I", identifier (node));
17005
17006 dump.pop (n);
17007 }
17008
17009 /* NODE is a deferred macro node. Determine the definition and return
17010 it, with NULL if undefined. May issue diagnostics.
17011
17012 This can leak memory, when merging declarations -- the string
17013 contents (TOKEN_FLD_STR) of each definition are allocated in
17014 unreclaimable cpp objstack. Only one will win. However, I do not
17015 expect this to be common -- mostly macros have a single point of
17016 definition. Perhaps we could restore the objstack to its position
17017 after the first imported definition (if that wins)? The macros
17018 themselves are GC'd. */
17019
17020 cpp_macro *
17021 module_state::deferred_macro (cpp_reader *reader, location_t loc,
17022 cpp_hashnode *node)
17023 {
17024 macro_import &imports = (*macro_imports)[node->deferred - 1];
17025
17026 unsigned n = dump.push (NULL);
17027 dump (dumper::MACRO) && dump ("Deferred macro %I", identifier (node));
17028
17029 bitmap visible (BITMAP_GGC_ALLOC ());
17030
17031 if (!((imports[0].get_defness () & macro_import::slot::L_UNDEF)
17032 && !imports[0].get_module ()))
17033 {
17034 /* Calculate the set of visible header imports. */
17035 bitmap_copy (visible, headers);
17036 for (unsigned ix = imports.length (); ix--;)
17037 {
17038 const macro_import::slot &slot = imports[ix];
17039 unsigned mod = slot.get_module ();
17040 if ((slot.get_defness () & macro_import::slot::L_UNDEF)
17041 && bitmap_bit_p (visible, mod))
17042 {
17043 bitmap arg = mod ? (*modules)[mod]->slurp->headers : headers;
17044 bitmap_and_compl_into (visible, arg);
17045 bitmap_set_bit (visible, mod);
17046 }
17047 }
17048 }
17049 bitmap_set_bit (visible, 0);
17050
17051 /* Now find the macros that are still visible. */
17052 bool failed = false;
17053 cpp_macro *def = NULL;
17054 vec<macro_export> defs;
17055 defs.create (imports.length ());
17056 for (unsigned ix = imports.length (); ix--;)
17057 {
17058 const macro_import::slot &slot = imports[ix];
17059 unsigned mod = slot.get_module ();
17060 if (bitmap_bit_p (visible, mod))
17061 {
17062 macro_export *pushed = NULL;
17063 if (mod)
17064 {
17065 const module_state *imp = (*modules)[mod];
17066 bytes_in &sec = imp->slurp->macro_defs;
17067 if (!sec.get_overrun ())
17068 {
17069 dump (dumper::MACRO)
17070 && dump ("Reading macro %s%s%s %I module %M at %u",
17071 slot.get_defness () & macro_import::slot::L_UNDEF
17072 ? "#undef" : "",
17073 slot.get_defness () == macro_import::slot::L_BOTH
17074 ? " & " : "",
17075 slot.get_defness () & macro_import::slot::L_DEF
17076 ? "#define" : "",
17077 identifier (node), imp, slot.offset);
17078 sec.random_access (slot.offset);
17079
17080 macro_export exp;
17081 if (slot.get_defness () & macro_import::slot::L_UNDEF)
17082 exp.undef_loc = imp->read_location (sec);
17083 if (slot.get_defness () & macro_import::slot::L_DEF)
17084 exp.def = imp->read_define (sec, reader);
17085 if (sec.get_overrun ())
17086 error_at (loc, "macro definitions of %qE corrupted",
17087 imp->name);
17088 else
17089 pushed = defs.quick_push (exp);
17090 }
17091 }
17092 else
17093 pushed = defs.quick_push ((*macro_exports)[slot.offset]);
17094 if (pushed && pushed->def)
17095 {
17096 if (!def)
17097 def = pushed->def;
17098 else if (cpp_compare_macros (def, pushed->def))
17099 failed = true;
17100 }
17101 }
17102 }
17103
17104 if (failed)
17105 {
17106 /* If LOC is the first loc, this is the end of file check, which
17107 is a warning. */
17108 if (loc == MAP_START_LOCATION (LINEMAPS_ORDINARY_MAP_AT (line_table, 0)))
17109 warning_at (loc, OPT_Winvalid_imported_macros,
17110 "inconsistent imported macro definition %qE",
17111 identifier (node));
17112 else
17113 error_at (loc, "inconsistent imported macro definition %qE",
17114 identifier (node));
17115 for (unsigned ix = defs.length (); ix--;)
17116 {
17117 macro_export &exp = defs[ix];
17118 if (exp.undef_loc)
17119 inform (exp.undef_loc, "%<#undef %E%>", identifier (node));
17120 if (exp.def)
17121 inform (exp.def->line, "%<#define %s%>",
17122 cpp_macro_definition (reader, node, exp.def));
17123 }
17124 def = NULL;
17125 }
17126
17127 defs.release ();
17128
17129 dump.pop (n);
17130
17131 return def;
17132 }
17133
17134 /* Stream the static aggregates. Sadly some headers (ahem:
17135 iostream) contain static vars, and rely on them to run global
17136 ctors. */
17137 unsigned
17138 module_state::write_inits (elf_out *to, depset::hash &table, unsigned *crc_ptr)
17139 {
17140 if (!static_aggregates && !tls_aggregates)
17141 return 0;
17142
17143 dump () && dump ("Writing initializers");
17144 dump.indent ();
17145
17146 static_aggregates = nreverse (static_aggregates);
17147 tls_aggregates = nreverse (tls_aggregates);
17148
17149 unsigned count = 0;
17150 trees_out sec (to, this, table, ~0u);
17151 sec.begin ();
17152
17153 tree list = static_aggregates;
17154 for (int passes = 0; passes != 2; passes++)
17155 {
17156 for (tree init = list; init; init = TREE_CHAIN (init), count++)
17157 if (TREE_LANG_FLAG_0 (init))
17158 {
17159 tree decl = TREE_VALUE (init);
17160
17161 dump ("Initializer:%u for %N", count, decl);
17162 sec.tree_node (decl);
17163 }
17164
17165 list = tls_aggregates;
17166 }
17167
17168 sec.end (to, to->name (MOD_SNAME_PFX ".ini"), crc_ptr);
17169 dump.outdent ();
17170
17171 return count;
17172 }
17173
17174 bool
17175 module_state::read_inits (unsigned count)
17176 {
17177 trees_in sec (this);
17178 if (!sec.begin (loc, from (), from ()->find (MOD_SNAME_PFX ".ini")))
17179 return false;
17180 dump () && dump ("Reading %u initializers", count);
17181 dump.indent ();
17182
17183 for (unsigned ix = 0; ix != count; ix++)
17184 {
17185 /* Merely referencing the decl causes its initializer to be read
17186 and added to the correct list. */
17187 tree decl = sec.tree_node ();
17188
17189 if (sec.get_overrun ())
17190 break;
17191 if (decl)
17192 dump ("Initializer:%u for %N", count, decl);
17193 }
17194 dump.outdent ();
17195 if (!sec.end (from ()))
17196 return false;
17197 return true;
17198 }
17199
17200 void
17201 module_state::write_counts (elf_out *to, unsigned counts[MSC_HWM],
17202 unsigned *crc_ptr)
17203 {
17204 bytes_out cfg (to);
17205
17206 cfg.begin ();
17207
17208 for (unsigned ix = MSC_HWM; ix--;)
17209 cfg.u (counts[ix]);
17210
17211 if (dump ())
17212 {
17213 dump ("Cluster sections are [%u,%u)",
17214 counts[MSC_sec_lwm], counts[MSC_sec_hwm]);
17215 dump ("Bindings %u", counts[MSC_bindings]);
17216 dump ("Pendings %u", counts[MSC_pendings]);
17217 dump ("Entities %u", counts[MSC_entities]);
17218 dump ("Namespaces %u", counts[MSC_namespaces]);
17219 dump ("Macros %u", counts[MSC_macros]);
17220 dump ("Initializers %u", counts[MSC_inits]);
17221 }
17222
17223 cfg.end (to, to->name (MOD_SNAME_PFX ".cnt"), crc_ptr);
17224 }
17225
17226 bool
17227 module_state::read_counts (unsigned counts[MSC_HWM])
17228 {
17229 bytes_in cfg;
17230
17231 if (!cfg.begin (loc, from (), MOD_SNAME_PFX ".cnt"))
17232 return false;
17233
17234 for (unsigned ix = MSC_HWM; ix--;)
17235 counts[ix] = cfg.u ();
17236
17237 if (dump ())
17238 {
17239 dump ("Declaration sections are [%u,%u)",
17240 counts[MSC_sec_lwm], counts[MSC_sec_hwm]);
17241 dump ("Bindings %u", counts[MSC_bindings]);
17242 dump ("Pendings %u", counts[MSC_pendings]);
17243 dump ("Entities %u", counts[MSC_entities]);
17244 dump ("Namespaces %u", counts[MSC_namespaces]);
17245 dump ("Macros %u", counts[MSC_macros]);
17246 dump ("Initializers %u", counts[MSC_inits]);
17247 }
17248
17249 return cfg.end (from ());
17250 }
17251
17252 /* Tool configuration: MOD_SNAME_PFX .config
17253
17254 This is data that confirms current state (or fails). */
17255
17256 void
17257 module_state::write_config (elf_out *to, module_state_config &config,
17258 unsigned inner_crc)
17259 {
17260 bytes_out cfg (to);
17261
17262 cfg.begin ();
17263
17264 /* Write version and inner crc as u32 values, for easier
17265 debug inspection. */
17266 dump () && dump ("Writing version=%V, inner_crc=%x",
17267 MODULE_VERSION, inner_crc);
17268 cfg.u32 (unsigned (MODULE_VERSION));
17269 cfg.u32 (inner_crc);
17270
17271 cfg.u (to->name (is_header () ? "" : get_flatname ()));
17272
17273 /* Configuration. */
17274 dump () && dump ("Writing target='%s', host='%s'",
17275 TARGET_MACHINE, HOST_MACHINE);
17276 unsigned target = to->name (TARGET_MACHINE);
17277 unsigned host = (!strcmp (TARGET_MACHINE, HOST_MACHINE)
17278 ? target : to->name (HOST_MACHINE));
17279 cfg.u (target);
17280 cfg.u (host);
17281
17282 cfg.str (config.dialect_str);
17283 cfg.u (extensions);
17284
17285 /* Global tree information. We write the globals crc separately,
17286 rather than mix it directly into the overall crc, as it is used
17287 to ensure data match between instances of the compiler, not
17288 integrity of the file. */
17289 dump () && dump ("Writing globals=%u, crc=%x",
17290 fixed_trees->length (), global_crc);
17291 cfg.u (fixed_trees->length ());
17292 cfg.u32 (global_crc);
17293
17294 if (is_partition ())
17295 cfg.u (is_interface ());
17296
17297 cfg.u (config.num_imports);
17298 cfg.u (config.num_partitions);
17299
17300 cfg.u (config.ordinary_locs);
17301 cfg.u (config.macro_locs);
17302 cfg.u (config.ordinary_loc_align);
17303
17304 /* Now generate CRC, we'll have incorporated the inner CRC because
17305 of its serialization above. */
17306 cfg.end (to, to->name (MOD_SNAME_PFX ".cfg"), &crc);
17307 dump () && dump ("Writing CRC=%x", crc);
17308 }
17309
17310 void
17311 module_state::note_cmi_name ()
17312 {
17313 if (!cmi_noted_p && filename)
17314 {
17315 cmi_noted_p = true;
17316 inform (loc, "compiled module file is %qs",
17317 maybe_add_cmi_prefix (filename));
17318 }
17319 }
17320
17321 bool
17322 module_state::read_config (module_state_config &config)
17323 {
17324 bytes_in cfg;
17325
17326 if (!cfg.begin (loc, from (), MOD_SNAME_PFX ".cfg"))
17327 return false;
17328
17329 /* Check version. */
17330 unsigned my_ver = MODULE_VERSION;
17331 unsigned their_ver = cfg.u32 ();
17332 dump () && dump (my_ver == their_ver ? "Version %V"
17333 : "Expecting %V found %V", my_ver, their_ver);
17334 if (their_ver != my_ver)
17335 {
17336 /* The compiler versions differ. Close enough? */
17337 verstr_t my_string, their_string;
17338
17339 version2string (my_ver, my_string);
17340 version2string (their_ver, their_string);
17341
17342 /* Reject when either is non-experimental or when experimental
17343 major versions differ. */
17344 bool reject_p = ((!IS_EXPERIMENTAL (my_ver)
17345 || !IS_EXPERIMENTAL (their_ver)
17346 || MODULE_MAJOR (my_ver) != MODULE_MAJOR (their_ver))
17347 /* The 'I know what I'm doing' switch. */
17348 && !flag_module_version_ignore);
17349 bool inform_p = true;
17350 if (reject_p)
17351 {
17352 cfg.set_overrun ();
17353 error_at (loc, "compiled module is %sversion %s",
17354 IS_EXPERIMENTAL (their_ver) ? "experimental " : "",
17355 their_string);
17356 }
17357 else
17358 inform_p = warning_at (loc, 0, "compiled module is %sversion %s",
17359 IS_EXPERIMENTAL (their_ver) ? "experimental " : "",
17360 their_string);
17361
17362 if (inform_p)
17363 {
17364 inform (loc, "compiler is %sversion %s%s%s",
17365 IS_EXPERIMENTAL (my_ver) ? "experimental " : "",
17366 my_string,
17367 reject_p ? "" : flag_module_version_ignore
17368 ? ", be it on your own head!" : ", close enough?",
17369 reject_p ? "" : " \xc2\xaf\\_(\xe3\x83\x84)_/\xc2\xaf");
17370 note_cmi_name ();
17371 }
17372
17373 if (reject_p)
17374 goto done;
17375 }
17376
17377 /* We wrote the inner crc merely to merge it, so simply read it
17378 back and forget it. */
17379 cfg.u32 ();
17380
17381 /* Check module name. */
17382 {
17383 const char *their_name = from ()->name (cfg.u ());
17384 const char *our_name = "";
17385
17386 if (!is_header ())
17387 our_name = get_flatname ();
17388
17389 /* Header units can be aliased, so name checking is
17390 inappropriate. */
17391 if (0 != strcmp (their_name, our_name))
17392 {
17393 error_at (loc,
17394 their_name[0] && our_name[0] ? G_("module %qs found")
17395 : their_name[0]
17396 ? G_("header module expected, module %qs found")
17397 : G_("module %qs expected, header module found"),
17398 their_name[0] ? their_name : our_name);
17399 cfg.set_overrun ();
17400 goto done;
17401 }
17402 }
17403
17404 /* Check the CRC after the above sanity checks, so that the user is
17405 clued in. */
17406 {
17407 unsigned e_crc = crc;
17408 crc = cfg.get_crc ();
17409 dump () && dump ("Reading CRC=%x", crc);
17410 if (!is_direct () && crc != e_crc)
17411 {
17412 error_at (loc, "module %qs CRC mismatch", get_flatname ());
17413 cfg.set_overrun ();
17414 goto done;
17415 }
17416 }
17417
17418 /* Check target & host. */
17419 {
17420 const char *their_target = from ()->name (cfg.u ());
17421 const char *their_host = from ()->name (cfg.u ());
17422 dump () && dump ("Read target='%s', host='%s'", their_target, their_host);
17423 if (strcmp (their_target, TARGET_MACHINE)
17424 || strcmp (their_host, HOST_MACHINE))
17425 {
17426 error_at (loc, "target & host is %qs:%qs, expected %qs:%qs",
17427 their_target, TARGET_MACHINE, their_host, HOST_MACHINE);
17428 cfg.set_overrun ();
17429 goto done;
17430 }
17431 }
17432
17433 /* Check compilation dialect. This must match. */
17434 {
17435 const char *their_dialect = cfg.str ();
17436 if (strcmp (their_dialect, config.dialect_str))
17437 {
17438 error_at (loc, "language dialect differs %qs, expected %qs",
17439 their_dialect, config.dialect_str);
17440 cfg.set_overrun ();
17441 goto done;
17442 }
17443 }
17444
17445 /* Check for extensions. If they set any, we must have them set
17446 too. */
17447 {
17448 unsigned ext = cfg.u ();
17449 unsigned allowed = (flag_openmp ? SE_OPENMP : 0);
17450
17451 if (unsigned bad = ext & ~allowed)
17452 {
17453 if (bad & SE_OPENMP)
17454 error_at (loc, "module contains OpenMP, use %<-fopenmp%> to enable");
17455 cfg.set_overrun ();
17456 goto done;
17457 }
17458 extensions = ext;
17459 }
17460
17461 /* Check global trees. */
17462 {
17463 unsigned their_fixed_length = cfg.u ();
17464 unsigned their_fixed_crc = cfg.u32 ();
17465 dump () && dump ("Read globals=%u, crc=%x",
17466 their_fixed_length, their_fixed_crc);
17467 if (!flag_preprocess_only
17468 && (their_fixed_length != fixed_trees->length ()
17469 || their_fixed_crc != global_crc))
17470 {
17471 error_at (loc, "fixed tree mismatch");
17472 cfg.set_overrun ();
17473 goto done;
17474 }
17475 }
17476
17477 /* All non-partitions are interfaces. */
17478 interface_p = !is_partition () || cfg.u ();
17479
17480 config.num_imports = cfg.u ();
17481 config.num_partitions = cfg.u ();
17482
17483 config.ordinary_locs = cfg.u ();
17484 config.macro_locs = cfg.u ();
17485 config.ordinary_loc_align = cfg.u ();
17486
17487 done:
17488 return cfg.end (from ());
17489 }
17490
17491 /* Use ELROND format to record the following sections:
17492 qualified-names : binding value(s)
17493 MOD_SNAME_PFX.README : human readable, strings
17494 MOD_SNAME_PFX.ENV : environment strings, strings
17495 MOD_SNAME_PFX.nms : namespace hierarchy
17496 MOD_SNAME_PFX.bnd : binding table
17497 MOD_SNAME_PFX.spc : specialization table
17498 MOD_SNAME_PFX.imp : import table
17499 MOD_SNAME_PFX.ent : entity table
17500 MOD_SNAME_PFX.prt : partitions table
17501 MOD_SNAME_PFX.olm : ordinary line maps
17502 MOD_SNAME_PFX.mlm : macro line maps
17503 MOD_SNAME_PFX.def : macro definitions
17504 MOD_SNAME_PFX.mac : macro index
17505 MOD_SNAME_PFX.ini : inits
17506 MOD_SNAME_PFX.cnt : counts
17507 MOD_SNAME_PFX.cfg : config data
17508 */
17509
17510 void
17511 module_state::write (elf_out *to, cpp_reader *reader)
17512 {
17513 /* Figure out remapped module numbers, which might elide
17514 partitions. */
17515 bitmap partitions = NULL;
17516 if (!is_header () && !is_partition ())
17517 partitions = BITMAP_GGC_ALLOC ();
17518
17519 unsigned mod_hwm = 1;
17520 for (unsigned ix = 1; ix != modules->length (); ix++)
17521 {
17522 module_state *imp = (*modules)[ix];
17523
17524 /* Promote any non-partition direct import from a partition, unless
17525 we're a partition. */
17526 if (!is_partition () && !imp->is_partition ()
17527 && imp->is_partition_direct ())
17528 imp->directness = MD_PURVIEW_DIRECT;
17529
17530 /* Write any import that is not a partition, unless we're a
17531 partition. */
17532 if (!partitions || !imp->is_partition ())
17533 imp->remap = mod_hwm++;
17534 else
17535 {
17536 dump () && dump ("Partition %M %u", imp, ix);
17537 bitmap_set_bit (partitions, ix);
17538 imp->remap = 0;
17539 /* All interface partitions must be exported. */
17540 if (imp->is_interface () && !bitmap_bit_p (exports, imp->mod))
17541 {
17542 error_at (imp->loc, "interface partition is not exported");
17543 bitmap_set_bit (exports, imp->mod);
17544 }
17545
17546 /* All the partition entities should have been loaded when
17547 loading the partition. */
17548 if (CHECKING_P)
17549 for (unsigned jx = 0; jx != imp->entity_num; jx++)
17550 {
17551 binding_slot *slot = &(*entity_ary)[imp->entity_lwm + jx];
17552 gcc_checking_assert (!slot->is_lazy ());
17553 }
17554 }
17555 }
17556
17557 if (partitions && bitmap_empty_p (partitions))
17558 /* No partitions present. */
17559 partitions = nullptr;
17560
17561 /* Find the set of decls we must write out. */
17562 depset::hash table (DECL_NAMESPACE_BINDINGS (global_namespace)->size () * 8);
17563 /* Add the specializations before the writables, so that we can
17564 detect injected friend specializations. */
17565 table.add_specializations (true);
17566 table.add_specializations (false);
17567 if (partial_specializations)
17568 {
17569 table.add_partial_entities (partial_specializations);
17570 partial_specializations = NULL;
17571 }
17572 table.add_namespace_entities (global_namespace, partitions);
17573 if (class_members)
17574 {
17575 table.add_class_entities (class_members);
17576 class_members = NULL;
17577 }
17578
17579 /* Now join everything up. */
17580 table.find_dependencies (this);
17581
17582 if (!table.finalize_dependencies ())
17583 {
17584 to->set_error ();
17585 return;
17586 }
17587
17588 #if CHECKING_P
17589 /* We're done verifying at-most once reading, reset to verify
17590 at-most once writing. */
17591 note_defs = note_defs_table_t::create_ggc (1000);
17592 #endif
17593
17594 /* Determine Strongy Connected Components. */
17595 vec<depset *> sccs = table.connect ();
17596
17597 unsigned crc = 0;
17598 module_state_config config;
17599 location_map_info map_info = write_prepare_maps (&config);
17600 unsigned counts[MSC_HWM];
17601
17602 config.num_imports = mod_hwm;
17603 config.num_partitions = modules->length () - mod_hwm;
17604 memset (counts, 0, sizeof (counts));
17605
17606 /* depset::cluster is the cluster number,
17607 depset::section is unspecified scratch value.
17608
17609 The following loops make use of the tarjan property that
17610 dependencies will be earlier in the SCCS array. */
17611
17612 /* This first loop determines the number of depsets in each SCC, and
17613 also the number of namespaces we're dealing with. During the
17614 loop, the meaning of a couple of depset fields now change:
17615
17616 depset::cluster -> size_of cluster, if first of cluster & !namespace
17617 depset::section -> section number of cluster (if !namespace). */
17618
17619 unsigned n_spaces = 0;
17620 counts[MSC_sec_lwm] = counts[MSC_sec_hwm] = to->get_section_limit ();
17621 for (unsigned size, ix = 0; ix < sccs.length (); ix += size)
17622 {
17623 depset **base = &sccs[ix];
17624
17625 if (base[0]->get_entity_kind () == depset::EK_NAMESPACE)
17626 {
17627 n_spaces++;
17628 size = 1;
17629 }
17630 else
17631 {
17632 /* Count the members in this cluster. */
17633 for (size = 1; ix + size < sccs.length (); size++)
17634 if (base[size]->cluster != base[0]->cluster)
17635 break;
17636
17637 for (unsigned jx = 0; jx != size; jx++)
17638 {
17639 /* Set the section number. */
17640 base[jx]->cluster = ~(~0u >> 1); /* A bad value. */
17641 base[jx]->section = counts[MSC_sec_hwm];
17642 }
17643
17644 /* Save the size in the first member's cluster slot. */
17645 base[0]->cluster = size;
17646
17647 counts[MSC_sec_hwm]++;
17648 }
17649 }
17650
17651 /* Write the clusters. Namespace decls are put in the spaces array.
17652 The meaning of depset::cluster changes to provide the
17653 unnamed-decl count of the depset's decl (and remains zero for
17654 non-decls and non-unnamed). */
17655 unsigned bytes = 0;
17656 vec<depset *> spaces;
17657 spaces.create (n_spaces);
17658
17659 for (unsigned size, ix = 0; ix < sccs.length (); ix += size)
17660 {
17661 depset **base = &sccs[ix];
17662
17663 if (base[0]->get_entity_kind () == depset::EK_NAMESPACE)
17664 {
17665 tree decl = base[0]->get_entity ();
17666 if (decl == global_namespace)
17667 base[0]->cluster = 0;
17668 else if (!base[0]->is_import ())
17669 {
17670 base[0]->cluster = counts[MSC_entities]++;
17671 spaces.quick_push (base[0]);
17672 counts[MSC_namespaces]++;
17673 if (CHECKING_P)
17674 {
17675 /* Add it to the entity map, such that we can tell it is
17676 part of us. */
17677 bool existed;
17678 unsigned *slot = &entity_map->get_or_insert
17679 (DECL_UID (decl), &existed);
17680 if (existed)
17681 /* It must have come from a partition. */
17682 gcc_checking_assert
17683 (import_entity_module (*slot)->is_partition ());
17684 *slot = ~base[0]->cluster;
17685 }
17686 dump (dumper::CLUSTER) && dump ("Cluster namespace %N", decl);
17687 }
17688 size = 1;
17689 }
17690 else
17691 {
17692 size = base[0]->cluster;
17693
17694 /* Cluster is now used to number entities. */
17695 base[0]->cluster = ~(~0u >> 1); /* A bad value. */
17696
17697 sort_cluster (&table, base, size);
17698
17699 /* Record the section for consistency checking during stream
17700 out -- we don't want to start writing decls in different
17701 sections. */
17702 table.section = base[0]->section;
17703 bytes += write_cluster (to, base, size, table, counts, &crc);
17704 table.section = 0;
17705 }
17706 }
17707
17708 /* We'd better have written as many sections and found as many
17709 namespaces as we predicted. */
17710 gcc_assert (counts[MSC_sec_hwm] == to->get_section_limit ()
17711 && spaces.length () == counts[MSC_namespaces]);
17712
17713 /* Write the entitites. None happens if we contain namespaces or
17714 nothing. */
17715 if (counts[MSC_entities])
17716 write_entities (to, sccs, counts[MSC_entities], &crc);
17717
17718 /* Write the namespaces. */
17719 if (counts[MSC_namespaces])
17720 write_namespaces (to, spaces, counts[MSC_namespaces], &crc);
17721
17722 /* Write the bindings themselves. */
17723 counts[MSC_bindings] = write_bindings (to, sccs, &crc);
17724
17725 /* Write the unnamed. */
17726 if (counts[MSC_pendings])
17727 write_pendings (to, sccs, table, counts[MSC_pendings], &crc);
17728
17729 /* Write the import table. */
17730 if (config.num_imports > 1)
17731 write_imports (to, &crc);
17732
17733 /* Write elided partition table. */
17734 if (config.num_partitions)
17735 write_partitions (to, config.num_partitions, &crc);
17736
17737 /* Write the line maps. */
17738 write_ordinary_maps (to, map_info, &config, config.num_partitions, &crc);
17739 write_macro_maps (to, map_info, &config, &crc);
17740
17741 if (is_header ())
17742 {
17743 counts[MSC_macros] = write_macros (to, reader, &crc);
17744 counts[MSC_inits] = write_inits (to, table, &crc);
17745 }
17746
17747 unsigned clusters = counts[MSC_sec_hwm] - counts[MSC_sec_lwm];
17748 dump () && dump ("Wrote %u clusters, average %u bytes/cluster",
17749 clusters, (bytes + clusters / 2) / (clusters + !clusters));
17750
17751 write_counts (to, counts, &crc);
17752
17753 /* And finish up. */
17754 write_config (to, config, crc);
17755
17756 spaces.release ();
17757 sccs.release ();
17758
17759 /* Human-readable info. */
17760 write_readme (to, reader, config.dialect_str, extensions);
17761
17762 // FIXME:QOI: Have a command line switch to control more detailed
17763 // information (which might leak data you do not want to leak).
17764 // Perhaps (some of) the write_readme contents should also be
17765 // so-controlled.
17766 if (false)
17767 write_env (to);
17768
17769 trees_out::instrument ();
17770 dump () && dump ("Wrote %u sections", to->get_section_limit ());
17771 }
17772
17773 /* Initial read of a CMI. Checks config, loads up imports and line
17774 maps. */
17775
17776 bool
17777 module_state::read_initial (cpp_reader *reader)
17778 {
17779 module_state_config config;
17780 bool ok = true;
17781
17782 if (ok && !from ()->begin (loc))
17783 ok = false;
17784
17785 if (ok && !read_config (config))
17786 ok = false;
17787
17788 bool have_locs = ok && read_prepare_maps (&config);
17789
17790 /* Ordinary maps before the imports. */
17791 if (have_locs && !read_ordinary_maps ())
17792 ok = false;
17793
17794 /* Allocate the REMAP vector. */
17795 slurp->alloc_remap (config.num_imports);
17796
17797 if (ok)
17798 {
17799 /* Read the import table. Decrement current to stop this CMI
17800 from being evicted during the import. */
17801 slurp->current--;
17802 if (config.num_imports > 1 && !read_imports (reader, line_table))
17803 ok = false;
17804 slurp->current++;
17805 }
17806
17807 /* Read the elided partition table, if we're the primary partition. */
17808 if (ok && config.num_partitions && is_module ()
17809 && !read_partitions (config.num_partitions))
17810 ok = false;
17811
17812 /* Determine the module's number. */
17813 gcc_checking_assert (mod == MODULE_UNKNOWN);
17814 gcc_checking_assert (this != (*modules)[0]);
17815
17816 /* We'll run out of other resources before we run out of module
17817 indices. */
17818 mod = modules->length ();
17819 vec_safe_push (modules, this);
17820
17821 /* We always import and export ourselves. */
17822 bitmap_set_bit (imports, mod);
17823 bitmap_set_bit (exports, mod);
17824
17825 if (ok)
17826 (*slurp->remap)[0] = mod << 1;
17827 dump () && dump ("Assigning %M module number %u", this, mod);
17828
17829 /* We should not have been frozen during the importing done by
17830 read_config. */
17831 gcc_assert (!from ()->is_frozen ());
17832
17833 /* Macro maps after the imports. */
17834 if (ok && have_locs && !read_macro_maps ())
17835 ok = false;
17836
17837 gcc_assert (slurp->current == ~0u);
17838 return ok;
17839 }
17840
17841 /* Read a preprocessor state. */
17842
17843 bool
17844 module_state::read_preprocessor (bool outermost)
17845 {
17846 gcc_checking_assert (is_header () && slurp
17847 && slurp->remap_module (0) == mod);
17848
17849 if (loadedness == ML_PREPROCESSOR)
17850 return !(from () && from ()->get_error ());
17851
17852 bool ok = true;
17853
17854 /* Read direct header imports. */
17855 unsigned len = slurp->remap->length ();
17856 for (unsigned ix = 1; ok && ix != len; ix++)
17857 {
17858 unsigned map = (*slurp->remap)[ix];
17859 if (map & 1)
17860 {
17861 module_state *import = (*modules)[map >> 1];
17862 if (import->is_header ())
17863 {
17864 ok = import->read_preprocessor (false);
17865 bitmap_ior_into (slurp->headers, import->slurp->headers);
17866 }
17867 }
17868 }
17869
17870 /* Record as a direct header. */
17871 if (ok)
17872 bitmap_set_bit (slurp->headers, mod);
17873
17874 if (ok && !read_macros ())
17875 ok = false;
17876
17877 loadedness = ML_PREPROCESSOR;
17878 announce ("macros");
17879
17880 if (flag_preprocess_only)
17881 /* We're done with the string table. */
17882 from ()->release ();
17883
17884 return check_read (outermost, ok);
17885 }
17886
17887 static unsigned lazy_snum;
17888
17889 static bool
17890 recursive_lazy (unsigned snum = ~0u)
17891 {
17892 if (lazy_snum)
17893 {
17894 error_at (input_location, "recursive lazy load");
17895 return true;
17896 }
17897
17898 lazy_snum = snum;
17899 return false;
17900 }
17901
17902 /* Read language state. */
17903
17904 bool
17905 module_state::read_language (bool outermost)
17906 {
17907 gcc_checking_assert (!lazy_snum);
17908
17909 if (loadedness == ML_LANGUAGE)
17910 return !(slurp && from () && from ()->get_error ());
17911
17912 gcc_checking_assert (slurp && slurp->current == ~0u
17913 && slurp->remap_module (0) == mod);
17914
17915 bool ok = true;
17916
17917 /* Read direct imports. */
17918 unsigned len = slurp->remap->length ();
17919 for (unsigned ix = 1; ok && ix != len; ix++)
17920 {
17921 unsigned map = (*slurp->remap)[ix];
17922 if (map & 1)
17923 {
17924 module_state *import = (*modules)[map >> 1];
17925 if (!import->read_language (false))
17926 ok = false;
17927 }
17928 }
17929
17930 unsigned counts[MSC_HWM];
17931
17932 if (ok && !read_counts (counts))
17933 ok = false;
17934
17935 function_depth++; /* Prevent unexpected GCs. */
17936
17937 /* Read the entity table. */
17938 entity_lwm = vec_safe_length (entity_ary);
17939 if (ok && counts[MSC_entities]
17940 && !read_entities (counts[MSC_entities],
17941 counts[MSC_sec_lwm], counts[MSC_sec_hwm]))
17942 ok = false;
17943
17944 /* Read the namespace hierarchy. */
17945 if (ok && counts[MSC_namespaces]
17946 && !read_namespaces (counts[MSC_namespaces]))
17947 ok = false;
17948
17949 if (ok && !read_bindings (counts[MSC_bindings],
17950 counts[MSC_sec_lwm], counts[MSC_sec_hwm]))
17951 ok = false;
17952
17953 /* And unnamed. */
17954 if (ok && counts[MSC_pendings] && !read_pendings (counts[MSC_pendings]))
17955 ok = false;
17956
17957 if (ok)
17958 {
17959 slurp->remaining = counts[MSC_sec_hwm] - counts[MSC_sec_lwm];
17960 available_clusters += counts[MSC_sec_hwm] - counts[MSC_sec_lwm];
17961 }
17962
17963 if (!flag_module_lazy
17964 || (is_partition ()
17965 && module_interface_p ()
17966 && !module_partition_p ()))
17967 {
17968 /* Read the sections in forward order, so that dependencies are read
17969 first. See note about tarjan_connect. */
17970 ggc_collect ();
17971
17972 lazy_snum = ~0u;
17973
17974 unsigned hwm = counts[MSC_sec_hwm];
17975 for (unsigned ix = counts[MSC_sec_lwm]; ok && ix != hwm; ix++)
17976 {
17977 if (!load_section (ix, NULL))
17978 {
17979 ok = false;
17980 break;
17981 }
17982 ggc_collect ();
17983 }
17984
17985 lazy_snum = 0;
17986
17987 if (ok && CHECKING_P)
17988 for (unsigned ix = 0; ix != entity_num; ix++)
17989 gcc_assert (!(*entity_ary)[ix + entity_lwm].is_lazy ());
17990 }
17991
17992 // If the import is a header-unit, we need to register initializers
17993 // of any static objects it contains (looking at you _Ioinit).
17994 // Notice, the ordering of these initializers will be that of a
17995 // dynamic initializer at this point in the current TU. (Other
17996 // instances of these objects in other TUs will be initialized as
17997 // part of that TU's global initializers.)
17998 if (ok && counts[MSC_inits] && !read_inits (counts[MSC_inits]))
17999 ok = false;
18000
18001 function_depth--;
18002
18003 announce (flag_module_lazy ? "lazy" : "imported");
18004 loadedness = ML_LANGUAGE;
18005
18006 gcc_assert (slurp->current == ~0u);
18007
18008 /* We're done with the string table. */
18009 from ()->release ();
18010
18011 return check_read (outermost, ok);
18012 }
18013
18014 bool
18015 module_state::maybe_defrost ()
18016 {
18017 bool ok = true;
18018 if (from ()->is_frozen ())
18019 {
18020 if (lazy_open >= lazy_limit)
18021 freeze_an_elf ();
18022 dump () && dump ("Defrosting '%s'", filename);
18023 ok = from ()->defrost (maybe_add_cmi_prefix (filename));
18024 lazy_open++;
18025 }
18026
18027 return ok;
18028 }
18029
18030 /* Load section SNUM, dealing with laziness. It doesn't matter if we
18031 have multiple concurrent loads, because we do not use TREE_VISITED
18032 when reading back in. */
18033
18034 bool
18035 module_state::load_section (unsigned snum, binding_slot *mslot)
18036 {
18037 if (from ()->get_error ())
18038 return false;
18039
18040 if (snum >= slurp->current)
18041 from ()->set_error (elf::E_BAD_LAZY);
18042 else if (maybe_defrost ())
18043 {
18044 unsigned old_current = slurp->current;
18045 slurp->current = snum;
18046 slurp->lru = 0; /* Do not swap out. */
18047 slurp->remaining--;
18048 read_cluster (snum);
18049 slurp->lru = ++lazy_lru;
18050 slurp->current = old_current;
18051 }
18052
18053 if (mslot && mslot->is_lazy ())
18054 {
18055 /* Oops, the section didn't set this slot. */
18056 from ()->set_error (elf::E_BAD_DATA);
18057 *mslot = NULL_TREE;
18058 }
18059
18060 bool ok = !from ()->get_error ();
18061 if (!ok)
18062 {
18063 error_at (loc, "failed to read compiled module cluster %u: %s",
18064 snum, from ()->get_error (filename));
18065 note_cmi_name ();
18066 }
18067
18068 maybe_completed_reading ();
18069
18070 return ok;
18071 }
18072
18073 void
18074 module_state::maybe_completed_reading ()
18075 {
18076 if (loadedness == ML_LANGUAGE && slurp->current == ~0u && !slurp->remaining)
18077 {
18078 lazy_open--;
18079 /* We no longer need the macros, all tokenizing has been done. */
18080 slurp->release_macros ();
18081
18082 from ()->end ();
18083 slurp->close ();
18084 slurped ();
18085 }
18086 }
18087
18088 /* After a reading operation, make sure things are still ok. If not,
18089 emit an error and clean up. */
18090
18091 bool
18092 module_state::check_read (bool outermost, bool ok)
18093 {
18094 gcc_checking_assert (!outermost || slurp->current == ~0u);
18095
18096 if (!ok)
18097 from ()->set_error ();
18098
18099 if (int e = from ()->get_error ())
18100 {
18101 error_at (loc, "failed to read compiled module: %s",
18102 from ()->get_error (filename));
18103 note_cmi_name ();
18104
18105 if (e == EMFILE
18106 || e == ENFILE
18107 #if MAPPED_READING
18108 || e == ENOMEM
18109 #endif
18110 || false)
18111 inform (loc, "consider using %<-fno-module-lazy%>,"
18112 " increasing %<-param-lazy-modules=%u%> value,"
18113 " or increasing the per-process file descriptor limit",
18114 param_lazy_modules);
18115 else if (e == ENOENT)
18116 inform (loc, "imports must be built before being imported");
18117
18118 if (outermost)
18119 fatal_error (loc, "returning to the gate for a mechanical issue");
18120
18121 ok = false;
18122 }
18123
18124 maybe_completed_reading ();
18125
18126 return ok;
18127 }
18128
18129 /* Return the IDENTIFIER_NODE naming module IX. This is the name
18130 including dots. */
18131
18132 char const *
18133 module_name (unsigned ix, bool header_ok)
18134 {
18135 if (modules)
18136 {
18137 module_state *imp = (*modules)[ix];
18138
18139 if (ix && !imp->name)
18140 imp = imp->parent;
18141
18142 if (header_ok || !imp->is_header ())
18143 return imp->get_flatname ();
18144 }
18145
18146 return NULL;
18147 }
18148
18149 /* Return the bitmap describing what modules are imported. Remember,
18150 we always import ourselves. */
18151
18152 bitmap
18153 get_import_bitmap ()
18154 {
18155 return (*modules)[0]->imports;
18156 }
18157
18158 /* Return the visible imports and path of instantiation for an
18159 instantiation at TINST. If TINST is nullptr, we're not in an
18160 instantiation, and thus will return the visible imports of the
18161 current TU (and NULL *PATH_MAP_P). We cache the information on
18162 the tinst level itself. */
18163
18164 static bitmap
18165 path_of_instantiation (tinst_level *tinst, bitmap *path_map_p)
18166 {
18167 gcc_checking_assert (modules_p ());
18168
18169 if (!tinst)
18170 {
18171 /* Not inside an instantiation, just the regular case. */
18172 *path_map_p = nullptr;
18173 return get_import_bitmap ();
18174 }
18175
18176 if (!tinst->path)
18177 {
18178 /* Calculate. */
18179 bitmap visible = path_of_instantiation (tinst->next, path_map_p);
18180 bitmap path_map = *path_map_p;
18181
18182 if (!path_map)
18183 {
18184 path_map = BITMAP_GGC_ALLOC ();
18185 bitmap_set_bit (path_map, 0);
18186 }
18187
18188 tree decl = tinst->tldcl;
18189 if (TREE_CODE (decl) == TREE_LIST)
18190 decl = TREE_PURPOSE (decl);
18191 if (TYPE_P (decl))
18192 decl = TYPE_NAME (decl);
18193
18194 if (unsigned mod = get_originating_module (decl))
18195 if (!bitmap_bit_p (path_map, mod))
18196 {
18197 /* This is brand new information! */
18198 bitmap new_path = BITMAP_GGC_ALLOC ();
18199 bitmap_copy (new_path, path_map);
18200 bitmap_set_bit (new_path, mod);
18201 path_map = new_path;
18202
18203 bitmap imports = (*modules)[mod]->imports;
18204 if (bitmap_intersect_compl_p (imports, visible))
18205 {
18206 /* IMPORTS contains additional modules to VISIBLE. */
18207 bitmap new_visible = BITMAP_GGC_ALLOC ();
18208
18209 bitmap_ior (new_visible, visible, imports);
18210 visible = new_visible;
18211 }
18212 }
18213
18214 tinst->path = path_map;
18215 tinst->visible = visible;
18216 }
18217
18218 *path_map_p = tinst->path;
18219 return tinst->visible;
18220 }
18221
18222 /* Return the bitmap describing what modules are visible along the
18223 path of instantiation. If we're not an instantiation, this will be
18224 the visible imports of the TU. *PATH_MAP_P is filled in with the
18225 modules owning the instantiation path -- we see the module-linkage
18226 entities of those modules. */
18227
18228 bitmap
18229 visible_instantiation_path (bitmap *path_map_p)
18230 {
18231 if (!modules_p ())
18232 return NULL;
18233
18234 return path_of_instantiation (current_instantiation (), path_map_p);
18235 }
18236
18237 /* We've just directly imported IMPORT. Update our import/export
18238 bitmaps. IS_EXPORT is true if we're reexporting the OTHER. */
18239
18240 void
18241 module_state::set_import (module_state const *import, bool is_export)
18242 {
18243 gcc_checking_assert (this != import);
18244
18245 /* We see IMPORT's exports (which includes IMPORT). If IMPORT is
18246 the primary interface or a partition we'll see its imports. */
18247 bitmap_ior_into (imports, import->is_module () || import->is_partition ()
18248 ? import->imports : import->exports);
18249
18250 if (is_export)
18251 /* We'll export OTHER's exports. */
18252 bitmap_ior_into (exports, import->exports);
18253 }
18254
18255 /* Return the declaring entity of DECL. That is the decl determining
18256 how to decorate DECL with module information. Returns NULL_TREE if
18257 it's the global module. */
18258
18259 tree
18260 get_originating_module_decl (tree decl)
18261 {
18262 /* An enumeration constant. */
18263 if (TREE_CODE (decl) == CONST_DECL
18264 && DECL_CONTEXT (decl)
18265 && (TREE_CODE (DECL_CONTEXT (decl)) == ENUMERAL_TYPE))
18266 decl = TYPE_NAME (DECL_CONTEXT (decl));
18267 else if (TREE_CODE (decl) == FIELD_DECL
18268 || TREE_CODE (decl) == USING_DECL)
18269 {
18270 decl = DECL_CONTEXT (decl);
18271 if (TREE_CODE (decl) != FUNCTION_DECL)
18272 decl = TYPE_NAME (decl);
18273 }
18274
18275 gcc_checking_assert (TREE_CODE (decl) == TEMPLATE_DECL
18276 || TREE_CODE (decl) == FUNCTION_DECL
18277 || TREE_CODE (decl) == TYPE_DECL
18278 || TREE_CODE (decl) == VAR_DECL
18279 || TREE_CODE (decl) == CONCEPT_DECL
18280 || TREE_CODE (decl) == NAMESPACE_DECL);
18281
18282 for (;;)
18283 {
18284 /* Uninstantiated template friends are owned by the befriending
18285 class -- not their context. */
18286 if (TREE_CODE (decl) == TEMPLATE_DECL
18287 && DECL_UNINSTANTIATED_TEMPLATE_FRIEND_P (decl))
18288 decl = TYPE_NAME (DECL_CHAIN (decl));
18289
18290 int use;
18291 if (tree ti = node_template_info (decl, use))
18292 {
18293 decl = TI_TEMPLATE (ti);
18294 if (TREE_CODE (decl) != TEMPLATE_DECL)
18295 {
18296 /* A friend template specialization. */
18297 gcc_checking_assert (OVL_P (decl));
18298 return global_namespace;
18299 }
18300 }
18301 else
18302 {
18303 tree ctx = CP_DECL_CONTEXT (decl);
18304 if (TREE_CODE (ctx) == NAMESPACE_DECL)
18305 break;
18306
18307 if (TYPE_P (ctx))
18308 {
18309 ctx = TYPE_NAME (ctx);
18310 if (!ctx)
18311 {
18312 /* Some kind of internal type. */
18313 gcc_checking_assert (DECL_ARTIFICIAL (decl));
18314 return global_namespace;
18315 }
18316 }
18317 decl = ctx;
18318 }
18319 }
18320
18321 return decl;
18322 }
18323
18324 int
18325 get_originating_module (tree decl, bool for_mangle)
18326 {
18327 tree owner = get_originating_module_decl (decl);
18328
18329 if (!DECL_LANG_SPECIFIC (owner))
18330 return for_mangle ? -1 : 0;
18331
18332 if (for_mangle
18333 && (DECL_MODULE_EXPORT_P (owner) || !DECL_MODULE_PURVIEW_P (owner)))
18334 return -1;
18335
18336 if (!DECL_MODULE_IMPORT_P (owner))
18337 return 0;
18338
18339 return get_importing_module (owner);
18340 }
18341
18342 unsigned
18343 get_importing_module (tree decl, bool flexible)
18344 {
18345 unsigned index = import_entity_index (decl, flexible);
18346 if (index == ~(~0u >> 1))
18347 return -1;
18348 module_state *module = import_entity_module (index);
18349
18350 return module->mod;
18351 }
18352
18353 /* Is it permissible to redeclare DECL. */
18354
18355 bool
18356 module_may_redeclare (tree decl)
18357 {
18358 module_state *me = (*modules)[0];
18359 module_state *them = me;
18360 if (DECL_LANG_SPECIFIC (decl) && DECL_MODULE_IMPORT_P (decl))
18361 {
18362 /* We can be given the TEMPLATE_RESULT. We want the
18363 TEMPLATE_DECL. */
18364 int use_tpl = -1;
18365 if (tree ti = node_template_info (decl, use_tpl))
18366 {
18367 tree tmpl = TI_TEMPLATE (ti);
18368 if (DECL_TEMPLATE_RESULT (tmpl) == decl)
18369 decl = tmpl;
18370 // FIXME: What about partial specializations? We need to
18371 // look at the specialization list in that case. Unless our
18372 // caller's given us the right thing. An alternative would
18373 // be to put both the template and the result into the
18374 // entity hash, but that seems expensive?
18375 }
18376 unsigned index = import_entity_index (decl);
18377 them = import_entity_module (index);
18378 }
18379
18380 if (them->is_header ())
18381 {
18382 if (!header_module_p ())
18383 return !module_purview_p ();
18384
18385 if (DECL_SOURCE_LOCATION (decl) == BUILTINS_LOCATION)
18386 /* This is a builtin, being declared in header-unit. We
18387 now need to mark it as an export. */
18388 DECL_MODULE_EXPORT_P (decl) = true;
18389
18390 /* If it came from a header, it's in the global module. */
18391 return true;
18392 }
18393
18394 if (me == them)
18395 return ((DECL_LANG_SPECIFIC (decl) && DECL_MODULE_PURVIEW_P (decl))
18396 == module_purview_p ());
18397
18398 if (!me->name)
18399 me = me->parent;
18400
18401 /* We can't have found a GMF entity from a named module. */
18402 gcc_checking_assert (DECL_LANG_SPECIFIC (decl)
18403 && DECL_MODULE_PURVIEW_P (decl));
18404
18405 return me && get_primary (them) == get_primary (me);
18406 }
18407
18408 /* DECL is being created by this TU. Record it came from here. We
18409 record module purview, so we can see if partial or explicit
18410 specialization needs to be written out, even though its purviewness
18411 comes from the most general template. */
18412
18413 void
18414 set_instantiating_module (tree decl)
18415 {
18416 gcc_assert (TREE_CODE (decl) == FUNCTION_DECL
18417 || TREE_CODE (decl) == VAR_DECL
18418 || TREE_CODE (decl) == TYPE_DECL
18419 || TREE_CODE (decl) == CONCEPT_DECL
18420 || TREE_CODE (decl) == TEMPLATE_DECL
18421 || (TREE_CODE (decl) == NAMESPACE_DECL
18422 && DECL_NAMESPACE_ALIAS (decl)));
18423
18424 if (!modules_p ())
18425 return;
18426
18427 if (!DECL_LANG_SPECIFIC (decl) && module_purview_p ())
18428 retrofit_lang_decl (decl);
18429 if (DECL_LANG_SPECIFIC (decl))
18430 {
18431 DECL_MODULE_PURVIEW_P (decl) = module_purview_p ();
18432 /* If this was imported, we'll still be in the entity_hash. */
18433 DECL_MODULE_IMPORT_P (decl) = false;
18434 if (TREE_CODE (decl) == TEMPLATE_DECL)
18435 {
18436 tree res = DECL_TEMPLATE_RESULT (decl);
18437 retrofit_lang_decl (res);
18438 DECL_MODULE_PURVIEW_P (res) = DECL_MODULE_PURVIEW_P (decl);
18439 DECL_MODULE_IMPORT_P (res) = false;
18440 }
18441 }
18442 }
18443
18444 /* If DECL is a class member, whose class is not defined in this TU
18445 (it was imported), remember this decl. */
18446
18447 void
18448 set_defining_module (tree decl)
18449 {
18450 gcc_checking_assert (!DECL_LANG_SPECIFIC (decl)
18451 || !DECL_MODULE_IMPORT_P (decl));
18452
18453 if (module_has_cmi_p ())
18454 {
18455 tree ctx = DECL_CONTEXT (decl);
18456 if (ctx
18457 && (TREE_CODE (ctx) == RECORD_TYPE || TREE_CODE (ctx) == UNION_TYPE)
18458 && DECL_LANG_SPECIFIC (TYPE_NAME (ctx))
18459 && DECL_MODULE_IMPORT_P (TYPE_NAME (ctx)))
18460 {
18461 /* This entity's context is from an import. We may need to
18462 record this entity to make sure we emit it in the CMI.
18463 Template specializations are in the template hash tables,
18464 so we don't need to record them here as well. */
18465 int use_tpl = -1;
18466 tree ti = node_template_info (decl, use_tpl);
18467 if (use_tpl <= 0)
18468 {
18469 if (ti)
18470 {
18471 gcc_checking_assert (!use_tpl);
18472 /* Get to the TEMPLATE_DECL. */
18473 decl = TI_TEMPLATE (ti);
18474 }
18475
18476 /* Record it on the class_members list. */
18477 vec_safe_push (class_members, decl);
18478 }
18479 }
18480 else if (DECL_IMPLICIT_TYPEDEF_P (decl)
18481 && CLASSTYPE_TEMPLATE_SPECIALIZATION (TREE_TYPE (decl)))
18482 /* This is a partial or explicit specialization. */
18483 vec_safe_push (partial_specializations, decl);
18484 }
18485 }
18486
18487 void
18488 set_originating_module (tree decl, bool friend_p ATTRIBUTE_UNUSED)
18489 {
18490 set_instantiating_module (decl);
18491
18492 if (TREE_CODE (CP_DECL_CONTEXT (decl)) != NAMESPACE_DECL)
18493 return;
18494
18495 gcc_checking_assert (friend_p || decl == get_originating_module_decl (decl));
18496
18497 if (!module_exporting_p ())
18498 return;
18499
18500 // FIXME: Check ill-formed linkage
18501 DECL_MODULE_EXPORT_P (decl) = true;
18502 }
18503
18504 /* DECL is attached to ROOT for odr purposes. */
18505
18506 void
18507 maybe_attach_decl (tree ctx, tree decl)
18508 {
18509 if (!modules_p ())
18510 return;
18511
18512 // FIXME: For now just deal with lambdas attached to var decls.
18513 // This might be sufficient?
18514 if (TREE_CODE (ctx) != VAR_DECL)
18515 return;
18516
18517 gcc_checking_assert (DECL_NAMESPACE_SCOPE_P (ctx));
18518
18519 if (!attached_table)
18520 attached_table = new attachset::hash (EXPERIMENT (1, 400));
18521
18522 if (attached_table->add (DECL_UID (ctx), decl))
18523 {
18524 retrofit_lang_decl (ctx);
18525 DECL_MODULE_ATTACHMENTS_P (ctx) = true;
18526 }
18527 }
18528
18529 /* Create the flat name string. It is simplest to have it handy. */
18530
18531 void
18532 module_state::set_flatname ()
18533 {
18534 gcc_checking_assert (!flatname);
18535 if (parent)
18536 {
18537 auto_vec<tree,5> ids;
18538 size_t len = 0;
18539 char const *primary = NULL;
18540 size_t pfx_len = 0;
18541
18542 for (module_state *probe = this;
18543 probe;
18544 probe = probe->parent)
18545 if (is_partition () && !probe->is_partition ())
18546 {
18547 primary = probe->get_flatname ();
18548 pfx_len = strlen (primary);
18549 break;
18550 }
18551 else
18552 {
18553 ids.safe_push (probe->name);
18554 len += IDENTIFIER_LENGTH (probe->name) + 1;
18555 }
18556
18557 char *flat = XNEWVEC (char, pfx_len + len + is_partition ());
18558 flatname = flat;
18559
18560 if (primary)
18561 {
18562 memcpy (flat, primary, pfx_len);
18563 flat += pfx_len;
18564 *flat++ = ':';
18565 }
18566
18567 for (unsigned len = 0; ids.length ();)
18568 {
18569 if (len)
18570 flat[len++] = '.';
18571 tree elt = ids.pop ();
18572 unsigned l = IDENTIFIER_LENGTH (elt);
18573 memcpy (flat + len, IDENTIFIER_POINTER (elt), l + 1);
18574 len += l;
18575 }
18576 }
18577 else if (is_header ())
18578 flatname = TREE_STRING_POINTER (name);
18579 else
18580 flatname = IDENTIFIER_POINTER (name);
18581 }
18582
18583 /* Read the CMI file for a module. */
18584
18585 bool
18586 module_state::do_import (cpp_reader *reader, bool outermost)
18587 {
18588 gcc_assert (global_namespace == current_scope () && loadedness == ML_NONE);
18589
18590 loc = linemap_module_loc (line_table, loc, get_flatname ());
18591
18592 if (lazy_open >= lazy_limit)
18593 freeze_an_elf ();
18594
18595 int fd = -1;
18596 int e = ENOENT;
18597 if (filename)
18598 {
18599 const char *file = maybe_add_cmi_prefix (filename);
18600 dump () && dump ("CMI is %s", file);
18601 fd = open (file, O_RDONLY | O_CLOEXEC | O_BINARY);
18602 e = errno;
18603 }
18604
18605 gcc_checking_assert (!slurp);
18606 slurp = new slurping (new elf_in (fd, e));
18607
18608 bool ok = true;
18609 if (!from ()->get_error ())
18610 {
18611 announce ("importing");
18612 loadedness = ML_CONFIG;
18613 lazy_open++;
18614 ok = read_initial (reader);
18615 slurp->lru = ++lazy_lru;
18616 }
18617
18618 gcc_assert (slurp->current == ~0u);
18619
18620 return check_read (outermost, ok);
18621 }
18622
18623 /* Attempt to increase the file descriptor limit. */
18624
18625 static bool
18626 try_increase_lazy (unsigned want)
18627 {
18628 gcc_checking_assert (lazy_open >= lazy_limit);
18629
18630 /* If we're increasing, saturate at hard limit. */
18631 if (want > lazy_hard_limit && lazy_limit < lazy_hard_limit)
18632 want = lazy_hard_limit;
18633
18634 #if HAVE_SETRLIMIT
18635 if ((!lazy_limit || !param_lazy_modules)
18636 && lazy_hard_limit
18637 && want <= lazy_hard_limit)
18638 {
18639 struct rlimit rlimit;
18640 rlimit.rlim_cur = want + LAZY_HEADROOM;
18641 rlimit.rlim_max = lazy_hard_limit + LAZY_HEADROOM;
18642 if (!setrlimit (RLIMIT_NOFILE, &rlimit))
18643 lazy_limit = want;
18644 }
18645 #endif
18646
18647 return lazy_open < lazy_limit;
18648 }
18649
18650 /* Pick a victim module to freeze its reader. */
18651
18652 void
18653 module_state::freeze_an_elf ()
18654 {
18655 if (try_increase_lazy (lazy_open * 2))
18656 return;
18657
18658 module_state *victim = NULL;
18659 for (unsigned ix = modules->length (); ix--;)
18660 {
18661 module_state *candidate = (*modules)[ix];
18662 if (candidate && candidate->slurp && candidate->slurp->lru
18663 && candidate->from ()->is_freezable ()
18664 && (!victim || victim->slurp->lru > candidate->slurp->lru))
18665 victim = candidate;
18666 }
18667
18668 if (victim)
18669 {
18670 dump () && dump ("Freezing '%s'", victim->filename);
18671 if (victim->slurp->macro_defs.size)
18672 /* Save the macro definitions to a buffer. */
18673 victim->from ()->preserve (victim->slurp->macro_defs);
18674 if (victim->slurp->macro_tbl.size)
18675 /* Save the macro definitions to a buffer. */
18676 victim->from ()->preserve (victim->slurp->macro_tbl);
18677 victim->from ()->freeze ();
18678 lazy_open--;
18679 }
18680 else
18681 dump () && dump ("No module available for freezing");
18682 }
18683
18684 /* Load the lazy slot *MSLOT, INDEX'th slot of the module. */
18685
18686 bool
18687 module_state::lazy_load (unsigned index, binding_slot *mslot)
18688 {
18689 unsigned n = dump.push (this);
18690
18691 gcc_checking_assert (function_depth);
18692
18693 unsigned cookie = mslot->get_lazy ();
18694 unsigned snum = cookie >> 2;
18695 dump () && dump ("Loading entity %M[%u] section:%u", this, index, snum);
18696
18697 bool ok = load_section (snum, mslot);
18698
18699 dump.pop (n);
18700
18701 return ok;
18702 }
18703
18704 /* Load MOD's binding for NS::ID into *MSLOT. *MSLOT contains the
18705 lazy cookie. OUTER is true if this is the outermost lazy, (used
18706 for diagnostics). */
18707
18708 void
18709 lazy_load_binding (unsigned mod, tree ns, tree id, binding_slot *mslot)
18710 {
18711 int count = errorcount + warningcount;
18712
18713 timevar_start (TV_MODULE_IMPORT);
18714
18715 /* Stop GC happening, even in outermost loads (because our caller
18716 could well be building up a lookup set). */
18717 function_depth++;
18718
18719 gcc_checking_assert (mod);
18720 module_state *module = (*modules)[mod];
18721 unsigned n = dump.push (module);
18722
18723 unsigned snum = mslot->get_lazy ();
18724 dump () && dump ("Lazily binding %P@%N section:%u", ns, id,
18725 module->name, snum);
18726
18727 bool ok = !recursive_lazy (snum);
18728 if (ok)
18729 {
18730 ok = module->load_section (snum, mslot);
18731 lazy_snum = 0;
18732 }
18733
18734 dump.pop (n);
18735
18736 function_depth--;
18737
18738 timevar_stop (TV_MODULE_IMPORT);
18739
18740 if (!ok)
18741 fatal_error (input_location,
18742 module->is_header ()
18743 ? G_("failed to load binding %<%E%s%E%>")
18744 : G_("failed to load binding %<%E%s%E@%s%>"),
18745 ns, &"::"[ns == global_namespace ? 2 : 0], id,
18746 module->get_flatname ());
18747
18748 if (count != errorcount + warningcount)
18749 inform (input_location,
18750 module->is_header ()
18751 ? G_("during load of binding %<%E%s%E%>")
18752 : G_("during load of binding %<%E%s%E@%s%>"),
18753 ns, &"::"[ns == global_namespace ? 2 : 0], id,
18754 module->get_flatname ());
18755 }
18756
18757 /* Load any pending specializations of TMPL. Called just before
18758 instantiating TMPL. */
18759
18760 void
18761 lazy_load_specializations (tree tmpl)
18762 {
18763 gcc_checking_assert (DECL_MODULE_PENDING_SPECIALIZATIONS_P (tmpl)
18764 && DECL_MODULE_ENTITY_P (tmpl));
18765
18766 int count = errorcount + warningcount;
18767
18768 timevar_start (TV_MODULE_IMPORT);
18769 bool ok = !recursive_lazy ();
18770 if (ok)
18771 {
18772 unsigned ident = import_entity_index (tmpl);
18773 if (pendset *set = pending_table->get (ident, true))
18774 {
18775 function_depth++; /* Prevent GC */
18776 unsigned n = dump.push (NULL);
18777 dump ()
18778 && dump ("Reading %u pending specializations keyed to %M[%u] %N",
18779 set->num, import_entity_module (ident),
18780 ident - import_entity_module (ident)->entity_lwm, tmpl);
18781 if (!pendset_lazy_load (set, true))
18782 ok = false;
18783 dump.pop (n);
18784
18785 function_depth--;
18786 }
18787 lazy_snum = 0;
18788 }
18789
18790 timevar_stop (TV_MODULE_IMPORT);
18791
18792 if (!ok)
18793 fatal_error (input_location, "failed to load specializations keyed to %qD",
18794 tmpl);
18795
18796 if (count != errorcount + warningcount)
18797 inform (input_location,
18798 "during load of specializations keyed to %qD", tmpl);
18799 }
18800
18801 void
18802 lazy_load_members (tree decl)
18803 {
18804 gcc_checking_assert (DECL_MODULE_PENDING_MEMBERS_P (decl));
18805 if (!DECL_MODULE_ENTITY_P (decl))
18806 {
18807 // FIXME: I can't help feeling that DECL_TEMPLATE_RESULT should
18808 // be inserted into the entity map, or perhaps have the same
18809 // DECL_UID as the template, so I don't have to do this dance
18810 // here and elsewhere. It also simplifies when DECL is a
18811 // partial specialization. (also noted elsewhere as an issue)
18812 tree ti = CLASSTYPE_TEMPLATE_INFO (TREE_TYPE (decl));
18813 tree tmpl = TI_TEMPLATE (ti);
18814 gcc_checking_assert (DECL_TEMPLATE_RESULT (tmpl) == decl);
18815 decl = tmpl;
18816 }
18817
18818 timevar_start (TV_MODULE_IMPORT);
18819 unsigned ident = import_entity_index (decl);
18820 if (pendset *set = pending_table->get (~ident, true))
18821 {
18822 function_depth++; /* Prevent GC */
18823 unsigned n = dump.push (NULL);
18824 dump () && dump ("Reading %u pending members keyed to %M[%u] %N",
18825 set->num, import_entity_module (ident),
18826 ident - import_entity_module (ident)->entity_lwm, decl);
18827 pendset_lazy_load (set, false);
18828 dump.pop (n);
18829
18830 function_depth--;
18831 }
18832 timevar_stop (TV_MODULE_IMPORT);
18833 }
18834
18835 static void
18836 direct_import (module_state *import, cpp_reader *reader)
18837 {
18838 timevar_start (TV_MODULE_IMPORT);
18839 unsigned n = dump.push (import);
18840
18841 gcc_checking_assert (import->is_direct () && import->is_rooted ());
18842 if (import->loadedness == ML_NONE)
18843 if (!import->do_import (reader, true))
18844 gcc_unreachable ();
18845
18846 if (import->loadedness < ML_LANGUAGE)
18847 {
18848 if (!attached_table)
18849 attached_table = new attachset::hash (EXPERIMENT (1, 400));
18850 import->read_language (true);
18851 }
18852
18853 (*modules)[0]->set_import (import, import->exported_p);
18854
18855 dump.pop (n);
18856 timevar_stop (TV_MODULE_IMPORT);
18857 }
18858
18859 /* Import module IMPORT. */
18860
18861 void
18862 import_module (module_state *import, location_t from_loc, bool exporting_p,
18863 tree, cpp_reader *reader)
18864 {
18865 if (!import->check_not_purview (from_loc))
18866 return;
18867
18868 if (!import->is_header () && current_lang_depth ())
18869 /* Only header units should appear inside language
18870 specifications. The std doesn't specify this, but I think
18871 that's an error in resolving US 033, because language linkage
18872 is also our escape clause to getting things into the global
18873 module, so we don't want to confuse things by having to think
18874 about whether 'extern "C++" { import foo; }' puts foo's
18875 contents into the global module all of a sudden. */
18876 warning (0, "import of named module %qs inside language-linkage block",
18877 import->get_flatname ());
18878
18879 if (exporting_p || module_exporting_p ())
18880 import->exported_p = true;
18881
18882 if (import->loadedness != ML_NONE)
18883 {
18884 from_loc = ordinary_loc_of (line_table, from_loc);
18885 linemap_module_reparent (line_table, import->loc, from_loc);
18886 }
18887 gcc_checking_assert (!import->module_p);
18888 gcc_checking_assert (import->is_direct () && import->is_rooted ());
18889
18890 direct_import (import, reader);
18891 }
18892
18893 /* Declare the name of the current module to be NAME. EXPORTING_p is
18894 true if this TU is the exporting module unit. */
18895
18896 void
18897 declare_module (module_state *module, location_t from_loc, bool exporting_p,
18898 tree, cpp_reader *reader)
18899 {
18900 gcc_assert (global_namespace == current_scope ());
18901
18902 module_state *current = (*modules)[0];
18903 if (module_purview_p () || module->loadedness != ML_NONE)
18904 {
18905 error_at (from_loc, module_purview_p ()
18906 ? G_("module already declared")
18907 : G_("module already imported"));
18908 if (module_purview_p ())
18909 module = current;
18910 inform (module->loc, module_purview_p ()
18911 ? G_("module %qs declared here")
18912 : G_("module %qs imported here"),
18913 module->get_flatname ());
18914 return;
18915 }
18916
18917 gcc_checking_assert (module->module_p);
18918 gcc_checking_assert (module->is_direct () && module->is_rooted ());
18919
18920 /* Yer a module, 'arry. */
18921 module_kind &= ~MK_GLOBAL;
18922 module_kind |= MK_MODULE;
18923
18924 if (module->is_partition () || exporting_p)
18925 {
18926 gcc_checking_assert (module->get_flatname ());
18927
18928 if (module->is_partition ())
18929 module_kind |= MK_PARTITION;
18930
18931 if (exporting_p)
18932 {
18933 module->interface_p = true;
18934 module_kind |= MK_INTERFACE;
18935 }
18936
18937 if (module->is_header ())
18938 module_kind |= MK_GLOBAL | MK_EXPORTING;
18939
18940 /* Copy the importing information we may have already done. We
18941 do not need to separate out the imports that only happen in
18942 the GMF, inspite of what the literal wording of the std
18943 might imply. See p2191, the core list had a discussion
18944 where the module implementors agreed that the GMF of a named
18945 module is invisible to importers. */
18946 module->imports = current->imports;
18947
18948 module->mod = 0;
18949 (*modules)[0] = module;
18950 }
18951 else
18952 {
18953 module->interface_p = true;
18954 current->parent = module; /* So mangler knows module identity. */
18955 direct_import (module, reader);
18956 }
18957 }
18958
18959 /* +1, we're the primary or a partition. Therefore emitting a
18960 globally-callable idemportent initializer function.
18961 -1, we have direct imports. Therefore emitting calls to their
18962 initializers. */
18963
18964 int
18965 module_initializer_kind ()
18966 {
18967 int result = 0;
18968
18969 if (module_has_cmi_p () && !header_module_p ())
18970 result = +1;
18971 else if (num_init_calls_needed)
18972 result = -1;
18973
18974 return result;
18975 }
18976
18977 /* Emit calls to each direct import's global initializer. Including
18978 direct imports of directly imported header units. The initializers
18979 of (static) entities in header units will be called by their
18980 importing modules (for the instance contained within that), or by
18981 the current TU (for the instances we've brought in). Of course
18982 such header unit behaviour is evil, but iostream went through that
18983 door some time ago. */
18984
18985 void
18986 module_add_import_initializers ()
18987 {
18988 unsigned calls = 0;
18989 if (modules)
18990 {
18991 tree fntype = build_function_type (void_type_node, void_list_node);
18992 releasing_vec args; // There are no args
18993
18994 for (unsigned ix = modules->length (); --ix;)
18995 {
18996 module_state *import = (*modules)[ix];
18997 if (import->call_init_p)
18998 {
18999 tree name = mangle_module_global_init (ix);
19000 tree fndecl = build_lang_decl (FUNCTION_DECL, name, fntype);
19001
19002 DECL_CONTEXT (fndecl) = FROB_CONTEXT (global_namespace);
19003 SET_DECL_ASSEMBLER_NAME (fndecl, name);
19004 TREE_PUBLIC (fndecl) = true;
19005 determine_visibility (fndecl);
19006
19007 tree call = cp_build_function_call_vec (fndecl, &args,
19008 tf_warning_or_error);
19009 finish_expr_stmt (call);
19010
19011 calls++;
19012 }
19013 }
19014 }
19015
19016 gcc_checking_assert (calls == num_init_calls_needed);
19017 }
19018
19019 /* NAME & LEN are a preprocessed header name, possibly including the
19020 surrounding "" or <> characters. Return the raw string name of the
19021 module to which it refers. This will be an absolute path, or begin
19022 with ./, so it is immediately distinguishable from a (non-header
19023 unit) module name. If READER is non-null, ask the preprocessor to
19024 locate the header to which it refers using the appropriate include
19025 path. Note that we do never do \ processing of the string, as that
19026 matches the preprocessor's behaviour. */
19027
19028 static const char *
19029 canonicalize_header_name (cpp_reader *reader, location_t loc, bool unquoted,
19030 const char *str, size_t &len_r)
19031 {
19032 size_t len = len_r;
19033 static char *buf = 0;
19034 static size_t alloc = 0;
19035
19036 if (!unquoted)
19037 {
19038 gcc_checking_assert (len >= 2
19039 && ((reader && str[0] == '<' && str[len-1] == '>')
19040 || (str[0] == '"' && str[len-1] == '"')));
19041 str += 1;
19042 len -= 2;
19043 }
19044
19045 if (reader)
19046 {
19047 gcc_assert (!unquoted);
19048
19049 if (len >= alloc)
19050 {
19051 alloc = len + 1;
19052 buf = XRESIZEVEC (char, buf, alloc);
19053 }
19054 memcpy (buf, str, len);
19055 buf[len] = 0;
19056
19057 if (const char *hdr
19058 = cpp_find_header_unit (reader, buf, str[-1] == '<', loc))
19059 {
19060 len = strlen (hdr);
19061 str = hdr;
19062 }
19063 else
19064 str = buf;
19065 }
19066
19067 if (!(str[0] == '.' ? IS_DIR_SEPARATOR (str[1]) : IS_ABSOLUTE_PATH (str)))
19068 {
19069 /* Prepend './' */
19070 if (len + 3 > alloc)
19071 {
19072 alloc = len + 3;
19073 buf = XRESIZEVEC (char, buf, alloc);
19074 }
19075
19076 buf[0] = '.';
19077 buf[1] = DIR_SEPARATOR;
19078 memmove (buf + 2, str, len);
19079 len += 2;
19080 buf[len] = 0;
19081 str = buf;
19082 }
19083
19084 len_r = len;
19085 return str;
19086 }
19087
19088 /* Set the CMI name from a cody packet. Issue an error if
19089 ill-formed. */
19090
19091 void module_state::set_filename (const Cody::Packet &packet)
19092 {
19093 gcc_checking_assert (!filename);
19094 if (packet.GetCode () == Cody::Client::PC_PATHNAME)
19095 filename = xstrdup (packet.GetString ().c_str ());
19096 else
19097 {
19098 gcc_checking_assert (packet.GetCode () == Cody::Client::PC_ERROR);
19099 error_at (loc, "unknown Compiled Module Interface: %s",
19100 packet.GetString ().c_str ());
19101 }
19102 }
19103
19104 /* Figure out whether to treat HEADER as an include or an import. */
19105
19106 static char *
19107 maybe_translate_include (cpp_reader *reader, line_maps *lmaps, location_t loc,
19108 const char *path)
19109 {
19110 if (!modules_p ())
19111 {
19112 /* Turn off. */
19113 cpp_get_callbacks (reader)->translate_include = NULL;
19114 return nullptr;
19115 }
19116
19117 if (!spans.init_p ())
19118 /* Before the main file, don't divert. */
19119 return nullptr;
19120
19121 dump.push (NULL);
19122
19123 dump () && dump ("Checking include translation '%s'", path);
19124 auto *mapper = get_mapper (cpp_main_loc (reader));
19125
19126 size_t len = strlen (path);
19127 path = canonicalize_header_name (NULL, loc, true, path, len);
19128 auto packet = mapper->IncludeTranslate (path, Cody::Flags::None, len);
19129 int xlate = false;
19130 if (packet.GetCode () == Cody::Client::PC_BOOL)
19131 xlate = -int (packet.GetInteger ());
19132 else if (packet.GetCode () == Cody::Client::PC_PATHNAME)
19133 {
19134 /* Record the CMI name for when we do the import. */
19135 module_state *import = get_module (build_string (len, path));
19136 import->set_filename (packet);
19137 xlate = +1;
19138 }
19139 else
19140 {
19141 gcc_checking_assert (packet.GetCode () == Cody::Client::PC_ERROR);
19142 error_at (loc, "cannot determine %<#include%> translation of %s: %s",
19143 path, packet.GetString ().c_str ());
19144 }
19145
19146 bool note = false;
19147 if (note_include_translate_yes && xlate > 1)
19148 note = true;
19149 else if (note_include_translate_no && xlate == 0)
19150 note = true;
19151 else if (note_includes)
19152 {
19153 /* We do not expect the note_includes vector to be large, so O(N)
19154 iteration. */
19155 for (unsigned ix = note_includes->length (); !note && ix--;)
19156 {
19157 const char *hdr = (*note_includes)[ix];
19158 size_t hdr_len = strlen (hdr);
19159 if ((hdr_len == len
19160 || (hdr_len < len && IS_DIR_SEPARATOR (path[len - hdr_len - 1])))
19161 && !memcmp (hdr, path + len - hdr_len, hdr_len))
19162 note = true;
19163 }
19164 }
19165
19166 if (note)
19167 inform (loc, xlate
19168 ? G_("include %qs translated to import")
19169 : G_("include %qs processed textually") , path);
19170
19171 dump () && dump (xlate ? "Translating include to import"
19172 : "Keeping include as include");
19173 dump.pop (0);
19174
19175 if (!(xlate > 0))
19176 return nullptr;
19177
19178 /* Create the translation text. */
19179 loc = ordinary_loc_of (lmaps, loc);
19180 const line_map_ordinary *map
19181 = linemap_check_ordinary (linemap_lookup (lmaps, loc));
19182 unsigned col = SOURCE_COLUMN (map, loc);
19183 col -= (col != 0); /* Columns are 1-based. */
19184
19185 unsigned alloc = len + col + 60;
19186 char *res = XNEWVEC (char, alloc);
19187
19188 strcpy (res, "__import");
19189 unsigned actual = 8;
19190 if (col > actual)
19191 {
19192 /* Pad out so the filename appears at the same position. */
19193 memset (res + actual, ' ', col - actual);
19194 actual = col;
19195 }
19196 /* No need to encode characters, that's not how header names are
19197 handled. */
19198 actual += snprintf (res + actual, alloc - actual,
19199 "\"%s\" [[__translated]];\n", path);
19200 gcc_checking_assert (actual < alloc);
19201
19202 /* cpplib will delete the buffer. */
19203 return res;
19204 }
19205
19206 static void
19207 begin_header_unit (cpp_reader *reader)
19208 {
19209 /* Set the module header name from the main_input_filename. */
19210 const char *main = main_input_filename;
19211 size_t len = strlen (main);
19212 main = canonicalize_header_name (NULL, 0, true, main, len);
19213 module_state *module = get_module (build_string (len, main));
19214
19215 preprocess_module (module, cpp_main_loc (reader), false, false, true, reader);
19216 }
19217
19218 /* We've just properly entered the main source file. I.e. after the
19219 command line, builtins and forced headers. Record the line map and
19220 location of this map. Note we may be called more than once. The
19221 first call sticks. */
19222
19223 void
19224 module_begin_main_file (cpp_reader *reader, line_maps *lmaps,
19225 const line_map_ordinary *map)
19226 {
19227 gcc_checking_assert (lmaps == line_table);
19228 if (modules_p () && !spans.init_p ())
19229 {
19230 unsigned n = dump.push (NULL);
19231 spans.init (lmaps, map);
19232 dump.pop (n);
19233 if (flag_header_unit && !cpp_get_options (reader)->preprocessed)
19234 {
19235 /* Tell the preprocessor this is an include file. */
19236 cpp_retrofit_as_include (reader);
19237 begin_header_unit (reader);
19238 }
19239 }
19240 }
19241
19242 /* We've just lexed a module-specific control line for MODULE. Mark
19243 the module as a direct import, and possibly load up its macro
19244 state. Returns the primary module, if this is a module
19245 declaration. */
19246 /* Perhaps we should offer a preprocessing mode where we read the
19247 directives from the header unit, rather than require the header's
19248 CMI. */
19249
19250 module_state *
19251 preprocess_module (module_state *module, location_t from_loc,
19252 bool in_purview, bool is_import, bool is_export,
19253 cpp_reader *reader)
19254 {
19255 if (!is_import)
19256 {
19257 if (module->loc)
19258 /* It's already been mentioned, so ignore its module-ness. */
19259 is_import = true;
19260 else
19261 {
19262 /* Record it is the module. */
19263 module->module_p = true;
19264 if (is_export)
19265 {
19266 module->exported_p = true;
19267 module->interface_p = true;
19268 }
19269 }
19270 }
19271
19272 if (module->directness < MD_DIRECT + in_purview)
19273 {
19274 /* Mark as a direct import. */
19275 module->directness = module_directness (MD_DIRECT + in_purview);
19276
19277 /* Set the location to be most informative for users. */
19278 from_loc = ordinary_loc_of (line_table, from_loc);
19279 if (module->loadedness != ML_NONE)
19280 linemap_module_reparent (line_table, module->loc, from_loc);
19281 else
19282 {
19283 module->loc = from_loc;
19284 if (!module->flatname)
19285 module->set_flatname ();
19286 }
19287 }
19288
19289 if (is_import
19290 && !module->is_module () && module->is_header ()
19291 && module->loadedness < ML_PREPROCESSOR
19292 && (!cpp_get_options (reader)->preprocessed
19293 || cpp_get_options (reader)->directives_only))
19294 {
19295 timevar_start (TV_MODULE_IMPORT);
19296 unsigned n = dump.push (module);
19297
19298 if (module->loadedness == ML_NONE)
19299 {
19300 unsigned pre_hwm = 0;
19301
19302 /* Preserve the state of the line-map. */
19303 pre_hwm = LINEMAPS_ORDINARY_USED (line_table);
19304 /* We only need to close the span, if we're going to emit a
19305 CMI. But that's a little tricky -- our token scanner
19306 needs to be smarter -- and this isn't much state.
19307 Remember, we've not parsed anything at this point, so
19308 our module state flags are inadequate. */
19309 spans.maybe_init ();
19310 spans.close ();
19311
19312 if (!module->filename)
19313 {
19314 auto *mapper = get_mapper (cpp_main_loc (reader));
19315 auto packet = mapper->ModuleImport (module->get_flatname ());
19316 module->set_filename (packet);
19317 }
19318 module->do_import (reader, true);
19319
19320 /* Restore the line-map state. */
19321 linemap_module_restore (line_table, pre_hwm);
19322 spans.open ();
19323 }
19324
19325 if (module->loadedness < ML_PREPROCESSOR)
19326 if (module->read_preprocessor (true))
19327 module->import_macros ();
19328
19329 dump.pop (n);
19330 timevar_stop (TV_MODULE_IMPORT);
19331 }
19332
19333 return is_import ? NULL : get_primary (module);
19334 }
19335
19336 /* We've completed phase-4 translation. Emit any dependency
19337 information for the not-yet-loaded direct imports, and fill in
19338 their file names. We'll have already loaded up the direct header
19339 unit wavefront. */
19340
19341 void
19342 preprocessed_module (cpp_reader *reader)
19343 {
19344 auto *mapper = get_mapper (cpp_main_loc (reader));
19345
19346 spans.maybe_init ();
19347 spans.close ();
19348
19349 /* Stupid GTY doesn't grok a typedef here. And using type = is, too
19350 modern. */
19351 #define iterator hash_table<module_state_hash>::iterator
19352 /* using iterator = hash_table<module_state_hash>::iterator; */
19353
19354 /* Walk the module hash, asking for the names of all unknown
19355 direct imports and informing of an export (if that's what we
19356 are). Notice these are emitted even when preprocessing as they
19357 inform the server of dependency edges. */
19358 timevar_start (TV_MODULE_MAPPER);
19359
19360 dump.push (NULL);
19361 dump () && dump ("Resolving direct import names");
19362
19363 if (!flag_preprocess_only
19364 || bool (mapper->get_flags () & Cody::Flags::NameOnly)
19365 || cpp_get_deps (reader))
19366 {
19367 mapper->Cork ();
19368 iterator end = modules_hash->end ();
19369 for (iterator iter = modules_hash->begin (); iter != end; ++iter)
19370 {
19371 module_state *module = *iter;
19372 if (module->is_direct () && !module->filename)
19373 {
19374 Cody::Flags flags
19375 = (flag_preprocess_only ? Cody::Flags::None
19376 : Cody::Flags::NameOnly);
19377
19378 if (module->module_p
19379 && (module->is_partition () || module->exported_p))
19380 mapper->ModuleExport (module->get_flatname (), flags);
19381 else
19382 mapper->ModuleImport (module->get_flatname (), flags);
19383 }
19384 }
19385
19386 auto response = mapper->Uncork ();
19387 auto r_iter = response.begin ();
19388 for (iterator iter = modules_hash->begin (); iter != end; ++iter)
19389 {
19390 module_state *module = *iter;
19391
19392 if (module->is_direct () && !module->filename)
19393 {
19394 Cody::Packet const &p = *r_iter;
19395 ++r_iter;
19396
19397 module->set_filename (p);
19398 }
19399 }
19400 }
19401
19402 dump.pop (0);
19403
19404 timevar_stop (TV_MODULE_MAPPER);
19405
19406 if (mkdeps *deps = cpp_get_deps (reader))
19407 {
19408 /* Walk the module hash, informing the dependency machinery. */
19409 iterator end = modules_hash->end ();
19410 for (iterator iter = modules_hash->begin (); iter != end; ++iter)
19411 {
19412 module_state *module = *iter;
19413
19414 if (module->is_direct ())
19415 {
19416 if (module->is_module ()
19417 && (module->is_interface () || module->is_partition ()))
19418 deps_add_module_target (deps, module->get_flatname (),
19419 maybe_add_cmi_prefix (module->filename),
19420 module->is_header());
19421 else
19422 deps_add_module_dep (deps, module->get_flatname ());
19423 }
19424 }
19425 }
19426
19427 if (flag_header_unit && !flag_preprocess_only)
19428 {
19429 iterator end = modules_hash->end ();
19430 for (iterator iter = modules_hash->begin (); iter != end; ++iter)
19431 {
19432 module_state *module = *iter;
19433 if (module->is_module ())
19434 {
19435 declare_module (module, cpp_main_loc (reader), true, NULL, reader);
19436 break;
19437 }
19438 }
19439 }
19440 #undef iterator
19441 }
19442
19443 /* VAL is a global tree, add it to the global vec if it is
19444 interesting. Add some of its targets, if they too are
19445 interesting. We do not add identifiers, as they can be re-found
19446 via the identifier hash table. There is a cost to the number of
19447 global trees. */
19448
19449 static int
19450 maybe_add_global (tree val, unsigned &crc)
19451 {
19452 int v = 0;
19453
19454 if (val && !(identifier_p (val) || TREE_VISITED (val)))
19455 {
19456 TREE_VISITED (val) = true;
19457 crc = crc32_unsigned (crc, fixed_trees->length ());
19458 vec_safe_push (fixed_trees, val);
19459 v++;
19460
19461 if (CODE_CONTAINS_STRUCT (TREE_CODE (val), TS_TYPED))
19462 v += maybe_add_global (TREE_TYPE (val), crc);
19463 if (CODE_CONTAINS_STRUCT (TREE_CODE (val), TS_TYPE_COMMON))
19464 v += maybe_add_global (TYPE_NAME (val), crc);
19465 }
19466
19467 return v;
19468 }
19469
19470 /* Initialize module state. Create the hash table, determine the
19471 global trees. Create the module for current TU. */
19472
19473 void
19474 init_modules (cpp_reader *reader)
19475 {
19476 /* PCH should not be reachable because of lang-specs, but the
19477 user could have overriden that. */
19478 if (pch_file)
19479 fatal_error (input_location,
19480 "C++ modules are incompatible with precompiled headers");
19481
19482 if (cpp_get_options (reader)->traditional)
19483 fatal_error (input_location,
19484 "C++ modules are incompatible with traditional preprocessing");
19485
19486 if (flag_preprocess_only)
19487 {
19488 cpp_options *cpp_opts = cpp_get_options (reader);
19489 if (flag_no_output
19490 || (cpp_opts->deps.style != DEPS_NONE
19491 && !cpp_opts->deps.need_preprocessor_output))
19492 {
19493 warning (0, flag_dump_macros == 'M'
19494 ? G_("macro debug output may be incomplete with modules")
19495 : G_("module dependencies require preprocessing"));
19496 if (cpp_opts->deps.style != DEPS_NONE)
19497 inform (input_location, "you should use the %<-%s%> option",
19498 cpp_opts->deps.style == DEPS_SYSTEM ? "MD" : "MMD");
19499 }
19500 }
19501
19502 /* :: is always exported. */
19503 DECL_MODULE_EXPORT_P (global_namespace) = true;
19504
19505 modules_hash = hash_table<module_state_hash>::create_ggc (31);
19506 vec_safe_reserve (modules, 20);
19507
19508 /* Create module for current TU. */
19509 module_state *current
19510 = new (ggc_alloc<module_state> ()) module_state (NULL_TREE, NULL, false);
19511 current->mod = 0;
19512 bitmap_set_bit (current->imports, 0);
19513 modules->quick_push (current);
19514
19515 gcc_checking_assert (!fixed_trees);
19516
19517 headers = BITMAP_GGC_ALLOC ();
19518
19519 if (note_includes)
19520 for (unsigned ix = 0; ix != note_includes->length (); ix++)
19521 {
19522 const char *hdr = (*note_includes)[ix];
19523 size_t len = strlen (hdr);
19524
19525 bool system = hdr[0] == '<';
19526 bool user = hdr[0] == '"';
19527 bool delimed = system || user;
19528
19529 if (len <= (delimed ? 2 : 0)
19530 || (delimed && hdr[len-1] != (system ? '>' : '"')))
19531 error ("invalid header name %qs", hdr);
19532
19533 hdr = canonicalize_header_name (delimed ? reader : NULL,
19534 0, !delimed, hdr, len);
19535 char *path = XNEWVEC (char, len + 1);
19536 memcpy (path, hdr, len);
19537 path[len+1] = 0;
19538
19539 (*note_includes)[ix] = path;
19540 }
19541
19542 dump.push (NULL);
19543
19544 /* Determine lazy handle bound. */
19545 {
19546 unsigned limit = 1000;
19547 #if HAVE_GETRLIMIT
19548 struct rlimit rlimit;
19549 if (!getrlimit (RLIMIT_NOFILE, &rlimit))
19550 {
19551 lazy_hard_limit = (rlimit.rlim_max < 1000000
19552 ? unsigned (rlimit.rlim_max) : 1000000);
19553 lazy_hard_limit = (lazy_hard_limit > LAZY_HEADROOM
19554 ? lazy_hard_limit - LAZY_HEADROOM : 0);
19555 if (rlimit.rlim_cur < limit)
19556 limit = unsigned (rlimit.rlim_cur);
19557 }
19558 #endif
19559 limit = limit > LAZY_HEADROOM ? limit - LAZY_HEADROOM : 1;
19560
19561 if (unsigned parm = param_lazy_modules)
19562 {
19563 if (parm <= limit || !lazy_hard_limit || !try_increase_lazy (parm))
19564 lazy_limit = parm;
19565 }
19566 else
19567 lazy_limit = limit;
19568 }
19569
19570 if (dump ())
19571 {
19572 verstr_t ver;
19573 version2string (MODULE_VERSION, ver);
19574 dump ("Source: %s", main_input_filename);
19575 dump ("Compiler: %s", version_string);
19576 dump ("Modules: %s", ver);
19577 dump ("Checking: %s",
19578 #if CHECKING_P
19579 "checking"
19580 #elif ENABLE_ASSERT_CHECKING
19581 "asserting"
19582 #else
19583 "release"
19584 #endif
19585 );
19586 dump ("Compiled by: "
19587 #ifdef __GNUC__
19588 "GCC %d.%d, %s", __GNUC__, __GNUC_MINOR__,
19589 #ifdef __OPTIMIZE__
19590 "optimizing"
19591 #else
19592 "not optimizing"
19593 #endif
19594 #else
19595 "not GCC"
19596 #endif
19597 );
19598 dump ("Reading: %s", MAPPED_READING ? "mmap" : "fileio");
19599 dump ("Writing: %s", MAPPED_WRITING ? "mmap" : "fileio");
19600 dump ("Lazy limit: %u", lazy_limit);
19601 dump ("Lazy hard limit: %u", lazy_hard_limit);
19602 dump ("");
19603 }
19604
19605 /* Construct the global tree array. This is an array of unique
19606 global trees (& types). Do this now, rather than lazily, as
19607 some global trees are lazily created and we don't want that to
19608 mess with our syndrome of fixed trees. */
19609 unsigned crc = 0;
19610 vec_alloc (fixed_trees, 200);
19611
19612 dump () && dump ("+Creating globals");
19613 /* Insert the TRANSLATION_UNIT_DECL. */
19614 TREE_VISITED (DECL_CONTEXT (global_namespace)) = true;
19615 fixed_trees->quick_push (DECL_CONTEXT (global_namespace));
19616 for (unsigned jx = 0; global_tree_arys[jx].first; jx++)
19617 {
19618 const tree *ptr = global_tree_arys[jx].first;
19619 unsigned limit = global_tree_arys[jx].second;
19620
19621 for (unsigned ix = 0; ix != limit; ix++, ptr++)
19622 {
19623 !(ix & 31) && dump ("") && dump ("+\t%u:%u:", jx, ix);
19624 unsigned v = maybe_add_global (*ptr, crc);
19625 dump () && dump ("+%u", v);
19626 }
19627 }
19628 global_crc = crc32_unsigned (crc, fixed_trees->length ());
19629 dump ("") && dump ("Created %u unique globals, crc=%x",
19630 fixed_trees->length (), global_crc);
19631 for (unsigned ix = fixed_trees->length (); ix--;)
19632 TREE_VISITED ((*fixed_trees)[ix]) = false;
19633
19634 dump.pop (0);
19635
19636 if (!flag_module_lazy)
19637 /* Get the mapper now, if we're not being lazy. */
19638 get_mapper (cpp_main_loc (reader));
19639
19640 if (!flag_preprocess_only)
19641 {
19642 pending_table = new pendset::hash (EXPERIMENT (1, 400));
19643
19644 entity_map = new entity_map_t (EXPERIMENT (1, 400));
19645 vec_safe_reserve (entity_ary, EXPERIMENT (1, 400));
19646 }
19647
19648 #if CHECKING_P
19649 note_defs = note_defs_table_t::create_ggc (1000);
19650 #endif
19651
19652 if (flag_header_unit && cpp_get_options (reader)->preprocessed)
19653 begin_header_unit (reader);
19654
19655 /* Collect here to make sure things are tagged correctly (when
19656 aggressively GC'd). */
19657 ggc_collect ();
19658 }
19659
19660 /* If NODE is a deferred macro, load it. */
19661
19662 static int
19663 load_macros (cpp_reader *reader, cpp_hashnode *node, void *)
19664 {
19665 location_t main_loc
19666 = MAP_START_LOCATION (LINEMAPS_ORDINARY_MAP_AT (line_table, 0));
19667
19668 if (cpp_user_macro_p (node)
19669 && !node->value.macro)
19670 {
19671 cpp_macro *macro = cpp_get_deferred_macro (reader, node, main_loc);
19672 dump () && dump ("Loaded macro #%s %I",
19673 macro ? "define" : "undef", identifier (node));
19674 }
19675
19676 return 1;
19677 }
19678
19679 /* At the end of tokenizing, we no longer need the macro tables of
19680 imports. But the user might have requested some checking. */
19681
19682 void
19683 maybe_check_all_macros (cpp_reader *reader)
19684 {
19685 if (!warn_imported_macros)
19686 return;
19687
19688 /* Force loading of any remaining deferred macros. This will
19689 produce diagnostics if they are ill-formed. */
19690 unsigned n = dump.push (NULL);
19691 cpp_forall_identifiers (reader, load_macros, NULL);
19692 dump.pop (n);
19693 }
19694
19695 /* Write the CMI, if we're a module interface. */
19696
19697 void
19698 finish_module_processing (cpp_reader *reader)
19699 {
19700 if (header_module_p ())
19701 module_kind &= ~MK_EXPORTING;
19702
19703 if (!modules || !(*modules)[0]->name)
19704 {
19705 if (flag_module_only)
19706 warning (0, "%<-fmodule-only%> used for non-interface");
19707 }
19708 else if (!flag_syntax_only)
19709 {
19710 int fd = -1;
19711 int e = ENOENT;
19712
19713 timevar_start (TV_MODULE_EXPORT);
19714
19715 /* Force a valid but empty line map at the end. This simplifies
19716 the line table preparation and writing logic. */
19717 linemap_add (line_table, LC_ENTER, false, "", 0);
19718
19719 /* We write to a tmpname, and then atomically rename. */
19720 const char *path = NULL;
19721 char *tmp_name = NULL;
19722 module_state *state = (*modules)[0];
19723
19724 unsigned n = dump.push (state);
19725 state->announce ("creating");
19726 if (state->filename)
19727 {
19728 size_t len = 0;
19729 path = maybe_add_cmi_prefix (state->filename, &len);
19730 tmp_name = XNEWVEC (char, len + 3);
19731 memcpy (tmp_name, path, len);
19732 strcpy (&tmp_name[len], "~");
19733
19734 if (!errorcount)
19735 for (unsigned again = 2; ; again--)
19736 {
19737 fd = open (tmp_name,
19738 O_RDWR | O_CREAT | O_TRUNC | O_CLOEXEC | O_BINARY,
19739 S_IRUSR|S_IWUSR|S_IRGRP|S_IWGRP|S_IROTH|S_IWOTH);
19740 e = errno;
19741 if (fd >= 0 || !again || e != ENOENT)
19742 break;
19743 create_dirs (tmp_name);
19744 }
19745 dump () && dump ("CMI is %s", path);
19746 }
19747
19748 if (errorcount)
19749 warning_at (state->loc, 0, "not writing module %qs due to errors",
19750 state->get_flatname ());
19751 else
19752 {
19753 elf_out to (fd, e);
19754 if (to.begin ())
19755 {
19756 auto loc = input_location;
19757 /* So crashes finger point the module decl. */
19758 input_location = state->loc;
19759 state->write (&to, reader);
19760 input_location = loc;
19761 }
19762 if (to.end ())
19763 {
19764 /* Some OS's do not replace NEWNAME if it already
19765 exists. This'll have a race condition in erroneous
19766 concurrent builds. */
19767 unlink (path);
19768 if (rename (tmp_name, path))
19769 {
19770 dump () && dump ("Rename ('%s','%s') errno=%u", errno);
19771 to.set_error (errno);
19772 }
19773 }
19774
19775 if (to.get_error ())
19776 {
19777 error_at (state->loc, "failed to write compiled module: %s",
19778 to.get_error (state->filename));
19779 state->note_cmi_name ();
19780 }
19781 }
19782
19783 if (!errorcount)
19784 {
19785 auto *mapper = get_mapper (cpp_main_loc (reader));
19786
19787 mapper->ModuleCompiled (state->get_flatname ());
19788 }
19789 else if (path)
19790 {
19791 /* We failed, attempt to erase all evidence we even tried. */
19792 unlink (tmp_name);
19793 unlink (path);
19794 XDELETEVEC (tmp_name);
19795 }
19796
19797 dump.pop (n);
19798 timevar_stop (TV_MODULE_EXPORT);
19799
19800 ggc_collect ();
19801 }
19802
19803 if (modules)
19804 {
19805 unsigned n = dump.push (NULL);
19806 dump () && dump ("Imported %u modules", modules->length () - 1);
19807 dump () && dump ("Containing %u clusters", available_clusters);
19808 dump () && dump ("Loaded %u clusters (%u%%)", loaded_clusters,
19809 (loaded_clusters * 100 + available_clusters / 2) /
19810 (available_clusters + !available_clusters));
19811 dump.pop (n);
19812 }
19813
19814 if (modules && !header_module_p ())
19815 {
19816 /* Determine call_init_p. We need the same bitmap allocation
19817 scheme as for the imports member. */
19818 function_depth++; /* Disable GC. */
19819 bitmap indirect_imports (BITMAP_GGC_ALLOC ());
19820
19821 /* Because indirect imports are before their direct import, and
19822 we're scanning the array backwards, we only need one pass! */
19823 for (unsigned ix = modules->length (); --ix;)
19824 {
19825 module_state *import = (*modules)[ix];
19826
19827 if (!import->is_header ()
19828 && !bitmap_bit_p (indirect_imports, ix))
19829 {
19830 /* Everything this imports is therefore indirectly
19831 imported. */
19832 bitmap_ior_into (indirect_imports, import->imports);
19833 /* We don't have to worry about the self-import bit,
19834 because of the single pass. */
19835
19836 import->call_init_p = true;
19837 num_init_calls_needed++;
19838 }
19839 }
19840 function_depth--;
19841 }
19842 }
19843
19844 void
19845 fini_modules ()
19846 {
19847 /* We're done with the macro tables now. */
19848 vec_free (macro_exports);
19849 vec_free (macro_imports);
19850 headers = NULL;
19851
19852 /* We're now done with everything but the module names. */
19853 set_cmi_repo (NULL);
19854 if (mapper)
19855 {
19856 timevar_start (TV_MODULE_MAPPER);
19857 module_client::close_module_client (0, mapper);
19858 mapper = nullptr;
19859 timevar_stop (TV_MODULE_MAPPER);
19860 }
19861 module_state_config::release ();
19862
19863 #if CHECKING_P
19864 note_defs = NULL;
19865 #endif
19866
19867 if (modules)
19868 for (unsigned ix = modules->length (); --ix;)
19869 if (module_state *state = (*modules)[ix])
19870 state->release ();
19871
19872 /* No need to lookup modules anymore. */
19873 modules_hash = NULL;
19874
19875 /* Or entity array. We still need the entity map to find import numbers. */
19876 delete entity_ary;
19877 entity_ary = NULL;
19878
19879 /* Or remember any pending entities. */
19880 delete pending_table;
19881 pending_table = NULL;
19882
19883 /* Or any attachments -- Let it go! */
19884 delete attached_table;
19885 attached_table = NULL;
19886
19887 /* Allow a GC, we've possibly made much data unreachable. */
19888 ggc_collect ();
19889 }
19890
19891 /* If CODE is a module option, handle it & return true. Otherwise
19892 return false. For unknown reasons I cannot get the option
19893 generation machinery to set fmodule-mapper or -fmodule-header to
19894 make a string type option variable. */
19895
19896 bool
19897 handle_module_option (unsigned code, const char *str, int)
19898 {
19899 auto hdr = CMS_header;
19900
19901 switch (opt_code (code))
19902 {
19903 case OPT_fmodule_mapper_:
19904 module_mapper_name = str;
19905 return true;
19906
19907 case OPT_fmodule_header_:
19908 {
19909 if (!strcmp (str, "user"))
19910 hdr = CMS_user;
19911 else if (!strcmp (str, "system"))
19912 hdr = CMS_system;
19913 else
19914 error ("unknown header kind %qs", str);
19915 }
19916 /* Fallthrough. */
19917
19918 case OPT_fmodule_header:
19919 flag_header_unit = hdr;
19920 flag_modules = 1;
19921 return true;
19922
19923 case OPT_flang_info_include_translate_:
19924 vec_safe_push (note_includes, str);
19925 return true;
19926
19927 default:
19928 return false;
19929 }
19930 }
19931
19932 /* Set preprocessor callbacks and options for modules. */
19933
19934 void
19935 module_preprocess_options (cpp_reader *reader)
19936 {
19937 gcc_checking_assert (!lang_hooks.preprocess_undef);
19938 if (modules_p ())
19939 {
19940 auto *cb = cpp_get_callbacks (reader);
19941
19942 cb->translate_include = maybe_translate_include;
19943 cb->user_deferred_macro = module_state::deferred_macro;
19944 if (flag_header_unit)
19945 {
19946 /* If the preprocessor hook is already in use, that
19947 implementation will call the undef langhook. */
19948 if (cb->undef)
19949 lang_hooks.preprocess_undef = module_state::undef_macro;
19950 else
19951 cb->undef = module_state::undef_macro;
19952 }
19953 auto *opt = cpp_get_options (reader);
19954 opt->module_directives = true;
19955 opt->main_search = cpp_main_search (flag_header_unit);
19956 }
19957 }
19958
19959 #include "gt-cp-module.h"