It doesn't. We haven't been able to prove that humans have subjective experiences either. LLMs display emotions in the way that actually matters: functionally.
If "x doesn't tell us y" is compatible with "x increases the likelihood of y, but not to the point of certainty," then you would have to agree that "x doesn't tell us y" holds for just about any controlled trial or experimental finding: "Randomized controlled trials finding that SSRIs treat depression don't tell us that SSRIs effectively treat depression."
> none of this tells us whether language models actually feel anything or have subjective experiences
This contradicts the statement from the model card above.