When I start a domain with the option vtpm = [ 'instance=1, backend=0' ],
vtpm_manager on dom0 correctly starts a new vtpmd process with:

    vtpmd clear pvm 1
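For context, a minimal sketch of where that option sits in the domU config
(every line except the vtpm one is a placeholder, not my actual config):

```
# minimal xm config sketch; name and kernel are placeholders
name   = "domu1"
kernel = "/boot/vmlinuz-2.6-xenU"
vtpm   = [ 'instance=1, backend=0' ]
```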
I can perform all TPM operations on this vTPM from domU, and I can see
that the instance is recorded correctly in the vTPM database:
#Database for VM to vTPM association
#1st column: domain name
#2nd column: TPM instance number
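So, for example, a domain named domu1 (name hypothetical) associated with
instance 1 would, as I understand the format, appear as:

```
domu1 1
```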
However, when I restart or shut down the domain and start it again, vtpmd
starts the vTPM instance with the clear option again, which I think is
wrong: all my previously created keys are lost on the new instance,
because the previous SRK is lost.
So the most important question follows: how do I save the state of a vTPM
across domU reboots?
I checked the code for this clear parameter. My understanding is that vtpm
is based on tpm_emulator, and tpm_emulator has three startup modes:
deactivated, save, and clear. Whenever I start a new domain, Xen starts
the vTPM with the clear parameter.
vtpm_create_instance() creates a new vTPM instance and decides what to do
with it based on the return value of vtpm_get_create_reason(), which reads
the value of xenbus/resume. vtpm_create_instance() then sends a command to
the TPM over a FIFO telling it whether to resume or start the vTPM
instance. When the command sent is start, the vTPM simply clears all the
PCRs and keys of the existing instance.
Is this vtpm resume path related only to domain save/restore and
suspend/resume, and therefore completely irrelevant here (i.e., when the
backend driver is restarted, all frontend connections must be resumed)? I
assume this because I saw similar resume commands sent over xenbus in the
netfront and blkfront driver code, but the TPM frontend (xenu) driver does
not include anything about this.
How do I save the state of the vTPM across domU shutdowns?