qemu/gcc-gcse-volatile.patch


Hello,
in the appended testcase (reduced from the Linux kernel), the following
bug occurred: load motion decides to optimize the non-volatile instance
of the load from src_pte, without taking the volatile instance into
account. When checking for availability, however, we consider these two
instances to be equal, which leads us to believe that the second load of
pte is redundant.
The patch fixes it.
Zdenek
typedef struct { unsigned long pte_low; } pte_t;

static inline void clear_bit(int nr, volatile void *addr)
{
	__asm__ __volatile__(
		"btrl %1,%0"
		:"=m" ((*(volatile long *) addr))
		:"Ir" (nr));
}

static inline void ptep_set_wrprotect(pte_t *ptep) { clear_bit(1, ptep); }

int copy_page_range(void)
{
	pte_t *src_pte;
	pte_t pte = *src_pte;
	if (pte.pte_low) {
		ptep_set_wrprotect(src_pte);
		pte = *src_pte;
	}
	foo (pte);
	return 0;
}
Changelog:
* gcse.c (expr_equiv_p): Don't consider anything to be equal to
volatile mem.
Index: gcse.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/gcse.c,v
retrieving revision 1.222.2.18
diff -c -3 -p -r1.222.2.18 gcse.c
*** gcc/gcse.c 15 Aug 2003 13:21:01 -0000 1.222.2.18
--- gcc/gcse.c 29 Aug 2003 22:33:15 -0000
*************** expr_equiv_p (x, y)
*** 1844,1849 ****
--- 1844,1853 ----
due to it being set with the different alias set. */
if (MEM_ALIAS_SET (x) != MEM_ALIAS_SET (y))
return 0;
+
+ /* A volatile mem should not be considered equivalent to any other. */
+ if (MEM_VOLATILE_P (x) || MEM_VOLATILE_P (y))
+ return 0;
break;
/* For commutative operations, check both orders. */