Took a quick look at the table of contents — what does this have that Sutton's book, OpenAI's Spinning Up in Deep RL, and Lilian Weng's blog don't already cover?
I can see that it includes vanishing gradients and imitation learning (BC and MaxEnt IRL), but those two topics are more Deep Learning related, leaning towards Deep RL.
Not sure I would call Sutton's book "application-aimed" — it has a fair share of theory too.
And a second skim of the (preliminary) document above just confirms that the "theory" you speak of is not far from what already exists in course lectures (such as David Silver's) and Sutton's book. This one just has a crap-ton of mathematical notation and way less text than other books (though I assume that text will be added later).
A lot of RL papers already spend a crap-ton of their content on mathematical theory and proofs, while the actual implementations hide a lot of unmentioned tinkering (normalize this, hyperparameter-tune that, subsample this) — and without those tricks the results are modest at best. Given that, I don't think yet another document that cranks out more math notation is needed or good for the field.
I made the comment because I don't see the point of adding another learning resource, theory-focused or not, when there are better avenues of exploration for making the field of RL easier to access and understand.