Re: [PATCH v10 049/108] KVM: x86/tdp_mmu: Support TDX private mapping for TDP MMU
From: Isaku Yamahata
Date: Thu Nov 17 2022 - 14:31:34 EST
On Wed, Nov 16, 2022 at 11:58:46AM +0000,
"Huang, Kai" <kai.huang@xxxxxxxxx> wrote:
>
> >
> > +static inline int kvm_alloc_private_spt_for_split(struct kvm_mmu_page *sp, gfp_t gfp)
> > +{
> > + gfp &= ~__GFP_ZERO;
> > + sp->private_spt = (void *)__get_free_page(gfp);
> > + if (!sp->private_spt)
> > + return -ENOMEM;
> > + return 0;
> > +}
> > +
> >
> [...]
>
> > @@ -1238,6 +1408,12 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> > is_large_pte(iter.old_spte)) {
> > if (tdp_mmu_zap_spte_atomic(vcpu->kvm, &iter))
> > break;
> > + /*
> > + * TODO: large page support.
> > + * Large pages are not supported for TDX yet.
> > + */
> > + KVM_BUG_ON(is_private_sptep(iter.sptep), vcpu->kvm);
> > +
> >
>
> So large pages are not supported for private pages, ...
>
> > /*
> > * The iter must explicitly re-read the spte here
> > @@ -1480,6 +1656,12 @@ static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(gfp_t gfp, union kvm_mm
> >
> > sp->role = role;
> > sp->spt = (void *)__get_free_page(gfp);
> > + if (kvm_mmu_page_role_is_private(role)) {
> > + if (kvm_alloc_private_spt_for_split(sp, gfp)) {
> > + free_page((unsigned long)sp->spt);
> > + sp->spt = NULL;
> > + }
> > + }
>
> ... then I don't think eager splitting can happen for private mappings?
>
> If so, should we just KVM_BUG_ON() if the role is private?
Right, I will remove this part from this patch series and move it to the
large page support series.
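
That means the split path in this series should only ever see a shared role.
If we wanted to keep a guard rail here instead of dropping the hunk entirely,
something like the below (untested sketch) is what Kai is suggesting.  Note
that __tdp_mmu_alloc_sp_for_split() only takes gfp and role in the quoted
code, so a struct kvm pointer would need to be plumbed in, or the check moved
to a caller that already has one, for KVM_BUG_ON() to be usable:

	/*
	 * Untested sketch: eager splitting of private mappings is not
	 * expected until large page support for TDX is added, so bail
	 * out instead of allocating a private page table.  "kvm" is
	 * assumed to be passed in by the caller.
	 */
	if (KVM_BUG_ON(kvm_mmu_page_role_is_private(role), kvm))
		return NULL;
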
--
Isaku Yamahata <isaku.yamahata@xxxxxxxxx>