Merge tpm 2017/12/22 v1
# gpg: Signature made Fri 22 Dec 2017 20:03:37 GMT
# gpg: using RSA key 0x75AD65802A0B4211
# gpg: Good signature from "Stefan Berger <stefanb@linux.vnet.ibm.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: B818 B9CA DF90 89C2 D5CE C66B 75AD 6580 2A0B 4211
* remotes/stefanberger/tags/pull-tpm-2017-12-22-1:
acpi: Update TPM2 ACPI table to more recent specs
tpm: Implement tpm_sized_buffer_reset
tpm_tis: merge r/w_offset into rw_offset
tpm_tis: move r/w_offsets to TPMState
tpm_tis: merge read and write buffer into single buffer
tpm_tis: move buffers from localities into common location
tpm_tis: remove TPMSizeBuffer usage
tpm_tis: limit size of buffer from backend
tpm_tis: convert uint32_t to size_t
tpm_emulator: Add a caching layer for the TPM Established flag
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
# gpg: Signature made Fri 22 Dec 2017 02:12:29 GMT
# gpg: using RSA key 0xEF04965B398D6211
# gpg: Good signature from "Jason Wang (Jason Wang on RedHat) <jasowang@redhat.com>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 215D 46F4 8246 689E C77F 3562 EF04 965B 398D 6211
* remotes/jasowang/tags/net-pull-request:
qemu-doc: Update the deprecation information of -tftp, -bootp, -redir and -smb
qemu-doc: The "-net nic" option can be used with "netdev=...", too
net: Remove the legacy "-net channel" parameter
net: remove unused compute_mcast_idx() function
rtl8139: use inline net_crc32() and bitshift instead of compute_mcast_idx()
ne2000: use inline net_crc32() and bitshift instead of compute_mcast_idx()
ftgmac100: use inline net_crc32() and bitshift instead of compute_mcast_idx()
lan9118: use inline net_crc32() and bitshift instead of compute_mcast_idx()
opencores_eth: use inline net_crc32() and bitshift instead of compute_mcast_idx()
eepro100: use inline net_crc32() and bitshift instead of compute_mcast_idx()
sungem: fix multicast filter CRC calculation
sunhme: switch sunhme over to use net_crc32_le()
eepro100: switch eepro100 e100_compute_mcast_idx() over to use net_crc32()
pcnet: switch pcnet over to use net_crc32_le()
net: introduce net_crc32_le() function
net: move CRC32 calculation from compute_mcast_idx() into its own net_crc32() function
e1000: Separate TSO and non-TSO contexts, fixing UDP TX corruption
e1000, e1000e: Move per-packet TX offload flags out of context state
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
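Several of the patches in the net pull request above replace compute_mcast_idx() with an inline net_crc32() call followed by a bit shift. As a rough, self-contained sketch of that pattern (written out here for illustration, not copied from the tree), the multicast filter index is the top six bits of a big-endian Ethernet CRC over the MAC address:

    #include <stdint.h>

    #define POLYNOMIAL_BE 0x04c11db7u   /* Ethernet CRC-32 polynomial, MSB first */

    /* Bit-by-bit big-endian CRC-32 over a buffer, as used for multicast hashing. */
    static uint32_t net_crc32(const uint8_t *p, int len)
    {
        uint32_t crc = 0xffffffffu;

        for (int i = 0; i < len; i++) {
            uint8_t b = p[i];
            for (int j = 0; j < 8; j++) {
                int carry = ((crc >> 31) & 1) ^ (b & 1);
                crc <<= 1;
                b >>= 1;
                if (carry) {
                    crc ^= POLYNOMIAL_BE;
                }
            }
        }
        return crc;
    }

    /* What compute_mcast_idx() used to return: the top 6 bits of the CRC. */
    static unsigned multicast_filter_index(const uint8_t mac[6])
    {
        return net_crc32(mac, 6) >> 26;
    }

The net_crc32_le() variant introduced in the same series processes the bits in the opposite order, for devices such as pcnet and sunhme that hash the little-endian CRC instead.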
Some cleanup; this also allows SR to be moved from any addressing mode.
The previous code was wrong for ColdFire: ColdFire also allows an
addressing mode to be used to set SR/CCR, but only supports a data
register to get SR/CCR (move from).
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20180104012913.30763-15-laurent@vivier.eu>
As gen_helper_get_ccr() is able to compute CCR from cc_op and the
flags, we don't need to flush the flags before calling it.
flush_flags() and get_ccr() both use COMPUTE_CCR() to compute the
flags: get_ccr() computes the CCR value, whereas flush_flags()
updates the live cc_op and flags.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20180104012913.30763-3-laurent@vivier.eu>
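For illustration, the resulting translation code can emit the CCR read directly; this is only a sketch based on the description above, not a quote of the patch:

    /* Sketch: let the helper derive CCR from cc_op and the live flag
     * inputs; no gen_flush_flags() is needed beforehand. */
    static TCGv gen_get_ccr(DisasContext *s)
    {
        TCGv dest = tcg_temp_new();

        update_cc_op(s);                    /* make the current cc_op visible */
        gen_helper_get_ccr(dest, cpu_env);  /* computes CCR via COMPUTE_CCR() */
        return dest;
    }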
If the script is run with a core (no running process), it produces an
error:
(gdb) dump-guest-memory /tmp/vmcore X86_64
guest RAM blocks:
target_start target_end host_addr message count
---------------- ---------------- ---------------- ------- -----
0000000000000000 00000000000a0000 00007f7935800000 added 1
00000000000a0000 00000000000b0000 00007f7934200000 added 2
00000000000c0000 00000000000ca000 00007f79358c0000 added 3
00000000000ca000 00000000000cd000 00007f79358ca000 joined 3
00000000000cd000 00000000000e8000 00007f79358cd000 joined 3
00000000000e8000 00000000000f0000 00007f79358e8000 joined 3
00000000000f0000 0000000000100000 00007f79358f0000 joined 3
0000000000100000 0000000080000000 00007f7935900000 joined 3
00000000fd000000 00000000fe000000 00007f7934200000 added 4
00000000fffc0000 0000000100000000 00007f7935600000 added 5
Python Exception <class 'gdb.error'> You can't do that without a process to debug.:
Error occurred in Python command: You can't do that without a process
to debug.
Replace the object_resolve_path_type() function call with a local
volatile variable.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Use the function argument "name" instead of the hardcoded
"VMCOREINFO". All callers pass "VMCOREINFO" as the argument, so this
isn't an exposed bug, thankfully.
Simplify the code a little while touching this.
Suggested-by: Andrew Jones <drjones@redhat.com>
Reported-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
We already handle this in the backends, and the lifetime datum
for the TCGOp is already large enough.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We had two fields specific to INDEX_op_call. Rename these and
add some macros so that the fields may be reused for other opcodes.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
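A hedged sketch of what this can look like (the field and macro names here are illustrative assumptions, not quoted from the patch):

    /* The two call-specific counters become generic per-op parameters... */
    typedef struct TCGOp {
        unsigned opc    : 8;
        unsigned param1 : 4;   /* formerly the number of call inputs  */
        unsigned param2 : 4;   /* formerly the number of call outputs */
        /* ... argument indices, linkage, etc. ... */
    } TCGOp;

    /* ...and INDEX_op_call accesses them through macros, leaving the same
     * storage free for reuse by other opcodes. */
    #define TCGOP_CALLI(op)  ((op)->param1)
    #define TCGOP_CALLO(op)  ((op)->param2)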
With no fixed array allocation, we can't overflow a buffer.
This will be important as optimizations related to host vectors
may expand the number of ops used.
Use QTAILQ to link the ops together.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
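A minimal, self-contained sketch of the idea, using the BSD-style <sys/queue.h> macros that QEMU's QTAILQ mirrors (the struct and function names are illustrative):

    #include <stdlib.h>
    #include <sys/queue.h>

    typedef struct Op {
        int opc;                 /* opcode */
        TAILQ_ENTRY(Op) link;    /* list linkage replaces a fixed array slot */
    } Op;

    typedef struct Context {
        TAILQ_HEAD(, Op) ops;    /* ops are allocated on demand, linked in order */
    } Context;

    static void context_init(Context *s)
    {
        TAILQ_INIT(&s->ops);
    }

    /* Append a freshly allocated op; there is no fixed-size buffer to overflow. */
    static Op *emit_op(Context *s, int opc)
    {
        Op *op = calloc(1, sizeof(*op));

        op->opc = opc;
        TAILQ_INSERT_TAIL(&s->ops, op, link);
        return op;
    }

Later passes can then insert or delete ops anywhere in the stream with the usual TAILQ insert/remove macros, which is what keeps op expansion by the host-vector optimizations cheap.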
These are now trivial sets and tests against NULL. Unwrap.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
cpu_restore_state officially supports being passed an address it can't
resolve the state for. As a result the checks in the helpers are
superfluous and can be removed. This makes the code consistent with
other users of cpu_restore_state.
Of course this does nothing to address what to do if cpu_restore_state
can't resolve the state, but so far it seems this is handled elsewhere.
The change was made with the included Coccinelle script.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
[rth: Fixed up comment indentation. Added second hunk to script to
combine cpu_restore_state and cpu_loop_exit.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
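The shape of the transformation is roughly the following (a hedged before/after sketch; the helper names are hypothetical and this is not the Coccinelle script itself):

    /* Before: the helper guarded the call itself. */
    static void helper_raise_before(CPUState *cs, uintptr_t retaddr)
    {
        if (retaddr) {
            cpu_restore_state(cs, retaddr);
        }
        cpu_loop_exit(cs);
    }

    /* After: cpu_restore_state() copes with an unresolvable address, so the
     * check goes away; the call pair can also be folded into the existing
     * cpu_loop_exit_restore() helper. */
    static void helper_raise_after(CPUState *cs, uintptr_t retaddr)
    {
        cpu_loop_exit_restore(cs, retaddr);
    }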
More recent specs of the TPM2 ACPI table add fields for the log area
start address and the log area minimum size, which we already use
for the TCPA table.
Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
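For reference, the extended table layout looks roughly like this (a sketch following the TCG ACPI specification; the struct and field names are written out for illustration and need not match the QEMU definitions exactly):

    #include <stdint.h>

    /* TPM2 ACPI table with the newer log-area fields appended, mirroring
     * what the TCPA table already carries. */
    struct tpm2_acpi_table {
        uint8_t  header[36];                /* standard ACPI table header ("TPM2") */
        uint16_t platform_class;
        uint16_t reserved;
        uint64_t control_area_address;
        uint32_t start_method;
        uint8_t  start_method_params[12];
        uint32_t log_area_minimum_length;   /* LAML: added by the newer revisions */
        uint64_t log_area_start_address;    /* LASA: added by the newer revisions */
    } __attribute__((packed));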
The bdrv_reopen*() implementation doesn't like it if the graph is
changed between queuing nodes for reopen and actually reopening them
(one of the reasons is that queuing can be recursive).
So instead of draining the device only in bdrv_reopen_multiple(),
require that callers have already drained all affected nodes, and
assert this in bdrv_reopen_queue().
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
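A hedged sketch of the resulting contract; do_reopen() below is a hypothetical stand-in for the bdrv_reopen_queue()/bdrv_reopen_multiple() pair, whose exact signatures are omitted here:

    static int reopen_drained(BlockDriverState *bs, Error **errp)
    {
        int ret;

        bdrv_subtree_drained_begin(bs);   /* the caller drains all affected nodes */
        ret = do_reopen(bs, errp);        /* hypothetical: queue nodes, then reopen */
        bdrv_subtree_drained_end(bs);
        return ret;
    }

    /* bdrv_reopen_queue() in turn asserts the contract for every queued node:
     *     assert(bs->quiesce_counter > 0);
     */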
Since commit bde70715, base is the only node that is reopened in
commit_start(). This means that the code, which still involves an
explicit BlockReopenQueue, can now be simplified by using bdrv_reopen().
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
We need to remember how many of the drain sections a node is in were
recursive (i.e. subtree drain rather than node drain), so that they
can be correctly applied when children are added or removed during
the drained section.
With this change, it is safe to modify the graph even inside a
bdrv_subtree_drained_begin/end() section.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
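A hedged sketch of the bookkeeping this needs (the struct, field and function names are illustrative, not quoted from the patch):

    /* Each node counts its active drain sections, and separately how many of
     * them were recursive (subtree) drains. */
    struct NodeDrainState {
        int quiesce_counter;            /* all active drain sections       */
        int recursive_quiesce_counter;  /* subset that drained the subtree */
    };

    /* When a child is attached to a drained parent, re-apply the parent's
     * recursive drains to it, so the graph change leaves the subtree quiesced. */
    static void apply_subtree_drains(struct NodeDrainState *parent,
                                     struct NodeDrainState *new_child)
    {
        for (int i = 0; i < parent->recursive_quiesce_counter; i++) {
            new_child->quiesce_counter++;
            new_child->recursive_quiesce_counter++;
        }
    }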
If bdrv_do_drained_begin/end() are called in coroutine context, they
first use a BH to get out of the coroutine context. Call some existing
tests again from a coroutine to cover this code path.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
bdrv_drained_begin() waits for the completion of requests in the whole
subtree, but it only actually keeps its immediate bs parameter quiesced
until bdrv_drained_end().
Add a version that keeps the whole subtree drained. As of this commit,
graph changes cannot be allowed during a subtree drained section, but
this will be fixed soon.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
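Usage is symmetric with the existing single-node drain; a minimal sketch:

    /* Quiesce bs and, recursively, all of its children; no new requests are
     * processed anywhere in the subtree until the matching end call. */
    bdrv_subtree_drained_begin(bs);

    /* ... operate on the quiesced subtree (graph changes here only become
     *     safe with a later patch in this series) ... */

    bdrv_subtree_drained_end(bs);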
This is in preparation for subtree drains, i.e. drained sections that
affect not only a single node, but recursively all child nodes, too.
Calling the parent callbacks for drain is pointless when we just came
from that parent node recursively and leads to multiple increases of
bs->quiesce_counter in a single drain call. Don't do it.
In order for this to work correctly, the parent callback must be called
for every bdrv_drain_begin/end() call, not only for the outermost one:
If we have a node N with two parents A and B, recursive draining of A
should cause the quiesce_counter of B to increase because its child N is
drained independently of B. If B is now recursively drained, too, A must
increase its quiesce_counter because N is only now drained independently
of A as well, even though N merely goes from quiesce_counter 1 to 2.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
bdrv_do_drained_begin() restricts the call of parent callbacks and
aio_disable_external() to the outermost drain section, but the block
driver callbacks are always called. bdrv_do_drained_end() must match
this behaviour, otherwise nodes stay drained even if begin/end calls
were balanced.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Block jobs are already paused using the BdrvChildRole drain callbacks,
so we don't need an additional block_job_pause_all() call.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Block jobs already paused themselves when their main BlockBackend
entered a drained section. This is not good enough: we also want to
pause a block job so that it does not submit new requests if, for
example, the mirror target node should be drained.
This implements .drained_begin/end callbacks in child_job in order to
consider all block nodes related to the job, and removes the
BlockBackend callbacks, which are unnecessary now because the root of
the job's main BlockBackend is always referenced with a child_job, too.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
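A hedged sketch of what such child_job callbacks look like, based on the description above (the exact structure and field names are assumptions):

    /* When any node the job is attached to is drained, pause the job so it
     * stops issuing new requests; resume it when the drained section ends. */
    static void child_job_drained_begin(BdrvChild *c)
    {
        BlockJob *job = c->opaque;

        block_job_pause(job);
    }

    static void child_job_drained_end(BdrvChild *c)
    {
        BlockJob *job = c->opaque;

        block_job_resume(job);
    }

    static const BdrvChildRole child_job = {
        /* ... */
        .drained_begin = child_job_drained_begin,
        .drained_end   = child_job_drained_end,
    };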
This is currently only working correctly for bdrv_drain(), not for
bdrv_drain_all(). Leave a comment for the drain_all case, we'll address
it later.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The existing test is for bdrv_drain_all_begin/end() only. Generalise the
test case so that it can be run for the other variants as well. At the
moment this is only bdrv_drain_begin/end(), but in a while, we'll add
another one.
Also, add a backing file to the test node to test whether the operations
work recursively.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>