That Rand paper is sort of a joke. It's positing an idea of "certainty" we don't apply to any other industry. The same logic would argue that human drivers are not known with certainty to be safe, that industrial accidents are not proven not to happen, that airline autopilots aren't known with certainty to be safe, etc.
The goal isn't certainty. It's "better". It seems like it might be "better" already.
The Rand study is not about "certainty"; it's about determining the safety of self-driving cars and the number of miles needed to do it. Are you confusing the colloquial meaning of "certainty" (as in "I'm sure about it") with its technical meaning, and particularly with uncertainty in probabilities and statistics?
Is there some other published work that you prefer over the Rand study?
"Certainty" appears in the abstract! But yes, the article is talking about the work required to derive a 95%-confident result of a specific improvement. But that's backwards, and not how one does statistics. You measure an effect, and then compute its confidence.
And it's spun anyway, since most of those "trillions of miles!" numbers reflect things like proving, with 95% confidence, a 100%+ improvement in safety. When all we really need to know before releasing this to the public is, with 95% confidence, an improvement of at least 0% (i.e. "is it not worse?").